Re: Beta testing crush optimization

(as a conclusion to this thread)

Thanks for testing (and for your patience while I was fixing a few bugs).

I'm glad your cluster is now almost even (+/- 1.5% over/under filled for the OSDs and +/- 0.5% for the hosts). That is a big improvement over the previous distribution (+/- 25% over/under filled for the OSDs and +/- 6% for the hosts).
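
For the record, a quick way to check that spread at any time is:

ceph osd df tree

(the %USE column shows each OSD's utilization and VAR its ratio to the cluster average; this is just a convenience check, not part of the optimization workflow)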

The worst case scenario, if a host fails, was (before optimization):

type      over filled %
device            30.15
host              10.53

After optimization it is down to:

type      over filled %
device             7.94
host               4.55

Since you have an all-SSD cluster, you chose to optimize in one go, which means the incremental approach (--step) was not used.

Cheers

On 05/24/2017 05:01 PM, Loic Dachary wrote:
> 
> 
> On 05/24/2017 04:50 PM, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> Great! What does pool 3 mean? Is it just the pool number from the pool dump / ls
>> command?
> 
> Yes. In the report you sent me, this is the number of the only pool in the cluster.
> 
>>
>> Stefan
>>
>> On 24.05.2017 at 15:48, Loic Dachary wrote:
>>> Hi Stefan,
>>>
>>> Thanks for volunteering to beta test the crush optimization on a live cluster :-)
>>>
>>> The "crush optimize" command was published today[1] and you should be able to improve your cluster distribution with the following:
>>>
>>> ceph report > report.json
>>> crush optimize --no-forecast --step 64 --crushmap report.json --pool 3 --out-path optimized.crush
>>> ceph osd setcrushmap -i optimized.crush
>>>
>>> Note that it will only perform a first optimization step (moving around 64 PGs). You will need to repeat this command a dozen times to fully optimize the cluster. I assume that is what you will want, to keep the rebalancing workload under control. If you want a minimal change at each step, you can try --step 1, but it will require more than a hundred steps.
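>>>
>>> For example (a rough, untested sketch; the count of 12 is only the estimate above), the repetition could be scripted as:
>>>
>>> for i in $(seq 1 12); do
>>>     ceph report > report.json
>>>     crush optimize --no-forecast --step 64 --crushmap report.json --pool 3 --out-path optimized.crush
>>>     ceph osd setcrushmap -i optimized.crush
>>>     # wait for the rebalancing triggered by this step to finish
>>>     # (e.g. watch ceph -s) before grabbing the next report
>>> done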
>>>
>>> If you're not worried about the load of the cluster, you can optimize it in one go with:
>>>
>>> ceph report > report.json
>>> crush optimize --crushmap report.json --pool 3 --out-path optimized.crush
>>> ceph osd setcrushmap -i optimized.crush
>>>
>>> Cheers
>>>
>>> [1] http://crush.readthedocs.io/en/latest/ceph/optimize.html
>>>
>>
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


