Hello, great! What does pool 3 mean? Is it just the pool number from the pool dump / ls command?

Stefan

On 24.05.2017 at 15:48, Loic Dachary wrote:
> Hi Stefan,
>
> Thanks for volunteering to beta test the crush optimization on a live cluster :-)
>
> The "crush optimize" command was published today[1] and you should be able to improve your cluster distribution with the following:
>
> ceph report > report.json
> crush optimize --no-forecast --step 64 --crushmap report.json --pool 3 --out-path optimized.crush
> ceph osd setcrushmap -i optimized.crush
>
> Note that it will only perform a first optimization step (moving around 64 PGs). You will need to repeat this command a dozen times to fully optimize the cluster. I assume that's what you will want, to keep the rebalancing workload under control. If you want a minimal change at each step, you can try --step 1, but it will require more than a hundred steps.
>
> If you're not worried about the load on the cluster, you can optimize it in one go with:
>
> ceph report > report.json
> crush optimize --crushmap report.json --pool 3 --out-path optimized.crush
> ceph osd setcrushmap -i optimized.crush
>
> Cheers
>
> [1] http://crush.readthedocs.io/en/latest/ceph/optimize.html
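
For reference, a minimal sketch of the stepwise procedure described in the quoted mail, assuming that --pool takes the numeric pool id (as listed by "ceph osd lspools") and that about a dozen iterations are enough; the loop itself is not part of the published command, only the three commands inside it are:

    # repeat the 64-PG optimization step roughly a dozen times
    for i in $(seq 1 12); do
        ceph report > report.json
        crush optimize --no-forecast --step 64 --crushmap report.json --pool 3 --out-path optimized.crush
        ceph osd setcrushmap -i optimized.crush
        # in practice one would likely wait for rebalancing to settle
        # (e.g. watch "ceph status") before starting the next step
    done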