Re: ceph balancer: further optimizations?

On Mon, Aug 20, 2018 at 10:19 PM Stefan Priebe - Profihost AG
<s.priebe@xxxxxxxxxxxx> wrote:
>
>
> Am 20.08.2018 um 21:52 schrieb Sage Weil:
> > On Mon, 20 Aug 2018, Stefan Priebe - Profihost AG wrote:
> >> Hello,
> >>
> >> since Loic seems to have left Ceph development and his wonderful CRUSH
> >> optimization tool isn't working anymore, I'm trying to get a good
> >> distribution with the ceph balancer.
> >>
> >> Sadly it does not work as well as I would like.
> >>
> >> # ceph osd df | sort -k8
> >>
> >> shows 75 to 83% usage, which is an 8% difference and too much for me.
> >> I'm optimizing by bytes.
> >>
> >> # ceph balancer eval
> >> current cluster score 0.005420 (lower is better)
> >>
> >> # ceph balancer eval $OPT_NAME
> >> plan spriebe_2018-08-20_19:36 final score 0.005456 (lower is better)
> >>
> >> I'm unable to optimize further ;-( Is there any chance to optimize
> >> further, even at the cost of more rebalancing?
> >
> > The scoring that the balancer module is doing is currently a hybrid of pg
> > count, bytes, and object count.  Picking a single metric might help a bit
> > (as those 3 things are not always perfectly aligned).
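
(Just to make that concrete, here's a toy version of a hybrid score; it is
purely illustrative and not the module's actual code. Averaging the imbalance
across the three metrics means a gain in bytes alone barely moves the combined
number, which is why scoring a single metric can make progress visible.)

    # Illustration only (not balancer.py): a toy hybrid score that averages
    # per-metric imbalance across pgs, bytes and objects.
    def imbalance(values):
        mean = sum(values) / float(len(values))
        return max(abs(v - mean) for v in values) / mean if mean else 0.0

    def hybrid_score(pgs, bytes_, objects):
        # equal-weight average of the three per-metric imbalances
        return (imbalance(pgs) + imbalance(bytes_) + imbalance(objects)) / 3.0

    # e.g. bytes spread 75..83% while pgs/objects are already flat:
    # hybrid_score([100, 100], [75, 83], [50, 50]) ~= 0.017, whereas
    # imbalance([75, 83]) ~= 0.051; the bytes gap is far more visible on its own.
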
>
> Hi,
>
> OK, I found a bug in the balancer code which seems to be present in all
> releases.
>
>  861                     best_ws = next_ws
>  862                     best_ow = next_ow
>
>
> should be:
>
>  861                     best_ws = copy.deepcopy(next_ws)
>  862                     best_ow = copy.deepcopy(next_ow)
>
> otherwise it keeps the last result instead of the best one.

Interesting... does that change improve things?
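
If I'm reading it right, that's the classic mutable-dict aliasing in Python:
plain assignment only stores a reference, so later in-place updates to
next_ws/next_ow leak into the "best" variables and the returned plan reflects
the last candidate rather than the best one. A minimal standalone sketch (an
illustration of the failure mode, not the module code):

    import copy  # only needed for the fix noted below

    def pick_best(candidates):
        # Toy version of the loop: mutate a working weight-set in place and
        # try to remember the best-scoring state seen so far.
        next_ws = {0: 1.0, 1: 1.0}
        best_ws = next_ws                # bug: alias, not a snapshot
        best_score = float('inf')
        for osd, delta, score in candidates:
            next_ws[osd] += delta        # in-place mutation of the working dict
            if score < best_score:
                best_score = score
                best_ws = next_ws        # still an alias; later mutations leak in
                # fix: best_ws = copy.deepcopy(next_ws)
        return best_ws

    print(pick_best([(0, -0.1, 0.004), (1, 0.2, 0.009)]))
    # buggy: {0: 0.9, 1: 1.2}  (the last state)
    # fixed: {0: 0.9, 1: 1.0}  (the state that actually scored best)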

Also, if most of your data is in one pool you can try ceph balancer
eval <pool-name>

-- dan

>
> I'm also using this one:
> https://github.com/ceph/ceph/pull/20665/commits/c161a74ad6cf006cd9b33b40fd7705b67c170615
>
> to optimize by bytes only.
>
> Greets,
> Stefan


