Re: Beta testing crush optimization

On 05/24/2017 04:50 PM, Stefan Priebe - Profihost AG wrote:
> Hello,
> 
> Great! What does "pool 3" mean? Is it just the pool number from the pool
> dump / ls command?

Yes. In the report you sent me, this is the number of the only pool in the cluster.
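
For reference, the pool id is the number printed next to each pool name by:

ceph osd lspools

or, with per-pool details:

ceph osd pool ls detail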

> 
> Stefan
> 
> On 24.05.2017 at 15:48, Loic Dachary wrote:
>> Hi Stefan,
>>
>> Thanks for volunteering to beta test the crush optimization on a live cluster :-)
>>
>> The "crush optimize" command was published today[1] and you should be able to improve your cluster distribution with the following:
>>
>> ceph report > report.json
>> crush optimize --no-forecast --step 64 --crushmap report.json --pool 3 --out-path optimized.crush
>> ceph osd setcrushmap -i optimized.crush
>>
>> Note that this performs only a first optimization step (moving around 64 PGs). You will need to repeat the sequence a dozen times to fully optimize the cluster; I assume that is what you want in order to keep the rebalancing workload under control. If you want a minimal change at each step, you can try --step 1 instead, but it will take more than a hundred steps.
>>
>> If you're not worried about the load on the cluster, you can optimize it in one go with:
>>
>> ceph report > report.json
>> crush optimize --crushmap report.json --pool 3 --out-path optimized.crush
>> ceph osd setcrushmap -i optimized.crush
>>
>> Cheers
>>
>> [1] http://crush.readthedocs.io/en/latest/ceph/optimize.html
>>
> 
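
By the way, if repeating those three commands a dozen times by hand gets tedious, they can be wrapped in a small loop. The sketch below is untested; it assumes the crush tool is in your PATH and uses a fixed sleep as a crude stand-in for waiting until the rebalancing from each step settles:

for step in $(seq 1 12); do
    # take a fresh snapshot of the cluster state after the previous move
    ceph report > report.json
    # compute the next incremental improvement (same flags as above)
    crush optimize --no-forecast --step 64 --crushmap report.json \
        --pool 3 --out-path optimized.crush
    # inject the optimized crushmap
    ceph osd setcrushmap -i optimized.crush
    # crude placeholder: give the cluster time to rebalance before the
    # next step; watch 'ceph -s' and 'ceph osd df' to see it settle
    sleep 600
done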

-- 
Loïc Dachary, Artisan Logiciel Libre