We have similar problems in our clusters, and sometimes we do a manual
reweight. We also noticed that smaller PGs (i.e. more of them per pool)
help with balancing.
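
For example, increasing a pool's PG count makes each PG smaller (the pool
name and numbers here are just an illustration; pg_num can only be
increased, and pgp_num should follow it):

ceph osd pool get rbd pg_num
ceph osd pool set rbd pg_num 512
ceph osd pool set rbd pgp_num 512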
Arvydas
On Dec 30, 2016 21:01, "Shinobu Kinjo" <skinjo@xxxxxxxxxx> wrote:
The best practice for reweighting OSDs is to run
test-reweight-by-utilization, which is a dry run of the reweight, before
running reweight-by-utilization itself.
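
For example:

ceph osd test-reweight-by-utilization   # dry run; prints proposed changes
ceph osd reweight-by-utilization        # apply once the output looks sane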
On Sat, Dec 31, 2016 at 3:05 AM, Brian Andrus
<brian.andrus@xxxxxxxxxxxxx> wrote:
> We have a set-it-and-forget-it cronjob set up to run once an hour to keep
> things a bit more balanced.
>
> 1 * * * * /bin/bash /home/briana/reweight_osd.sh 2>&1 | /usr/bin/logger -t
> ceph_reweight
>
> The script makes sure cluster health is OK and that no other rebalancing
> is going on. It also checks the reported STDDEV from `ceph osd df` and,
> if it is outside acceptable ranges, executes a gentle reweight (a rough
> sketch follows the command below).
>
> ceph osd reweight-by-utilization 103 .015 10
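>
> A rough sketch of what such a wrapper might look like (the STDDEV
> threshold of 2.5 is an arbitrary example, not our actual value):
>
> #!/bin/bash
> # Abort unless overall cluster health is OK.
> ceph health | grep -q HEALTH_OK || exit 0
> # Abort if backfill/recovery is already in flight.
> ceph status | grep -Eq 'backfill|recover' && exit 0
> # Pull the utilization STDDEV from the summary line of `ceph osd df`.
> stddev=$(ceph osd df | awk '/STDDEV/ {print $NF}')
> # Only act when the spread is wider than we like.
> if awk -v s="$stddev" 'BEGIN {exit !(s > 2.5)}'; then
>     ceph osd reweight-by-utilization 103 .015 10
> fi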
>
> It's definitely an "over time" kind of thing, but after a week we are
> already seeing pretty good results. Pending OSD reboots, a few months from
> now our cluster should be seeing quite a bit less difference in utilization.
>
> The three parameters after reweight-by-utilization are not well
> documented, but they are:
>
> 103 - select OSDs whose utilization is 3% or more above the average (the
> default is 120, but we want a larger pool of OSDs to choose from to reach
> a tighter tolerance over time)
> .015 - don't change any OSD's reweight by more than this increment per
> run (keeps the impact low)
> 10 - the maximum number of OSDs to adjust per run (keeps the impact
> manageable)
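>
> The same values can be previewed without applying anything by passing
> them to the dry-run variant:
>
> ceph osd test-reweight-by-utilization 103 .015 10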
>
> Hope that helps.
>
> On Fri, Dec 30, 2016 at 2:27 AM, Kees Meijs <kees@xxxxxxxx> wrote:
>>
>> Thanks, I'll try a manual reweight first.
>>
>> Have a happy New Year's Eve (yes, I know it's a day early)!
>>
>> Regards,
>> Kees
>>
>> On 30-12-16 11:17, Wido den Hollander wrote:
>> > For this reason you can reweight an OSD by running the 'ceph osd
>> > reweight-by-utilization' command, or do it manually with 'ceph osd
>> > reweight X 0-1'.
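>> >
>> > For example, to manually set the override reweight of osd.12 to 0.95
>> > (the OSD id and value are purely illustrative):
>> >
>> > ceph osd reweight 12 0.95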
>>
>
> --
> Brian Andrus
> Cloud Systems Engineer
> DreamHost, LLC
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com