Re: Luminous - replace old target-weight tree from osdmap with mgr balancer

Hi,

Am 11.01.2018 um 13:46 schrieb Sage Weil:
> On Thu, 11 Jan 2018, Stefan Priebe - Profihost AG wrote:
>> Thanks! Can this be done while still having jewel clients?
> 
> Yeah.  If I'm understanding your crush dump properly, the *-target-weight 
> part of the tree isn't doing anything except acting as a place for your 
> custom tool to store the target weights.  Removing this won't change 
> anything at all.  And the compat weight-set is called 'compat' because it 
> is backwards compatible with old clients.  :)

Thanks! It's the tool from Loic; I haven't seen him here for a long time.

See:
http://crush.readthedocs.io/en/latest/

Greets,
Stefan


> sage
> 
>>
>> Stefan
>>
>> Excuse my typo sent from my mobile phone.
>>
>>> Am 10.01.2018 um 22:56 schrieb Sage Weil <sage@xxxxxxxxxxxx>:
>>>
>>>> On Wed, 10 Jan 2018, Stefan Priebe - Profihost AG wrote:
>>>>> Am 10.01.2018 um 22:23 schrieb Sage Weil:
>>>>>> On Wed, 10 Jan 2018, Stefan Priebe - Profihost AG wrote:
>>>>>> Ok,
>>>>>>
>>>>>> in the past we used the python crush optimize tool to reweight the osd
>>>>>> usage - it inserted a 2nd tree with $hostname-target-weight as hostnames.
>>>>>
>>>>> Can you attach a 'ceph osd crush tree' (or partial output) so I can see 
>>>>> what you mean?
>>>>
>>>> Sure - attached.
>>>
>>> Got it
>>>
>>>>>> Now the questions are:
>>>>>> 1.) can we remove the tree? How?
>>>>>> 2.) Can we do this now or only after all clients are running Luminous?
>>>>>> 3.) is it enough to enable the mgr balancer module?
>>>
>>> First,
>>>
>>> ceph osd crush weight-set create-compat
>>>
>>> then for each osd,
>>> ceph osd crush weight-set reweight-compat <osd> <optimized-weight>
>>> ceph osd crush reweight <osd> <target-weight>
>>>
>>> That won't move any data but will keep your current optimized weights in 
>>> the compat weight-set where they belong.
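
[Those two per-OSD steps can be sketched as a dry-run script. The `ceph()` stub below only records the commands instead of executing them, and the osd ids and weights are invented examples; the real optimized/target pairs would come from your crush dump.]

```shell
#!/bin/sh
# Dry-run sketch: the ceph() stub records each command instead of
# executing it. Remove the stub to run against a live cluster.
CMDS=""
ceph() { CMDS="${CMDS}ceph $*
"; }

# Create the compat weight-set once.
ceph osd crush weight-set create-compat

# For each osd: keep the optimized weight in the compat weight-set,
# and restore the true target weight on the main crush tree.
# Columns: osd-id  optimized-weight  target-weight (example values).
while read -r osd optimized target; do
    ceph osd crush weight-set reweight-compat "$osd" "$optimized"
    ceph osd crush reweight "$osd" "$target"
done <<'EOF'
osd.0 1.712 1.819
osd.1 1.934 1.819
EOF

printf '%s' "$CMDS"
```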
>>>
>>> Then you can remove the *-target-weight buckets.  For each osd,
>>>
>>> ceph osd crush rm <osd> <ancestor>-target-weight
>>>
>>> and then for each remaining bucket
>>>
>>> ceph osd crush rm <foo>-target-weight
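
[The removal pass can be sketched the same way, again as a dry run. `host1-target-weight` and `default-target-weight` are made-up placeholders for whatever bucket names `ceph osd crush tree` shows in your *-target-weight subtree.]

```shell
#!/bin/sh
# Dry-run sketch: record the removal commands instead of executing
# them. The osd ids and bucket names are invented placeholders.
CMDS=""
ceph() { CMDS="${CMDS}ceph $*
"; }

# Detach each osd from its <host>-target-weight bucket first ...
for osd in osd.0 osd.1; do
    ceph osd crush rm "$osd" host1-target-weight
done

# ... then remove the now-empty buckets themselves.
for bucket in host1-target-weight default-target-weight; do
    ceph osd crush rm "$bucket"
done

printf '%s' "$CMDS"
```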
>>>
>>> Finally, turn on the balancer (or test it to see what it wants to do 
>>> with the optimize command.)
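
[For that last step, the Luminous balancer commands would look roughly like this; a sketch under the assumption you want crush-compat mode for the older clients, with `myplan` as an example plan name.]

```shell
#!/bin/sh
# Dry-run sketch of enabling and test-driving the mgr balancer; the
# ceph() stub records commands rather than running them.
CMDS=""
ceph() { CMDS="${CMDS}ceph $*
"; }

ceph mgr module enable balancer
ceph balancer mode crush-compat    # compat weight-sets work with old clients
ceph balancer optimize myplan      # compute a plan without applying it
ceph balancer show myplan          # inspect what it wants to do
# ceph balancer execute myplan     # apply the plan once it looks sane
# ceph balancer on                 # or let it run automatically

printf '%s' "$CMDS"
```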
>>>
>>> HTH!
>>> sage
>>


