Re: ceph reweight-by-utilization and increasing

Hello,

On Tue, 20 Sep 2016 14:40:25 +0200 Stefan Priebe - Profihost AG wrote:

> Hi Christian,
> 
> Am 20.09.2016 um 13:54 schrieb Christian Balzer:
> > This and the non-permanence of reweight are why I use CRUSH reweight (a
> > more distinct naming would be VERY helpful, too) and do it manually, which
> > tends to beat all the automated approaches so far.
> 
> so you really do it by hand and use ceph osd crush set weight?
>
Indeed.

Mind, my clusters aren't that big.
Also (as I described here before), by moving the worst offenders up and
down respectively while keeping the per-host weight as close to the
original value as possible, one winds up with only about half of the
OSDs needing tweaking.
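
For illustration, a rough sketch of that manual loop (the OSD ids,
weights and host layout below are made up, not from any real cluster):

  # check per-OSD utilization (available since Hammer)
  ceph osd df

  # push the fullest OSD on a host down a notch...
  ceph osd crush reweight osd.1 1.75

  # ...and pull a near-empty OSD on the same host up by roughly the
  # same amount, so the host's total CRUSH weight stays unchanged
  ceph osd crush reweight osd.2 1.89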

Also as mentioned before, both approaches are ultimately band-aids for a
problem that needs something far more integrated and smarter, short of
re-visiting the CRUSH algorithm itself.

Because with plain reweights you will lose the adjustment when the OSD
gets set out for some reason.
While this is not the case with CRUSH reweights, losing an OSD (with the
re-balancing that ensues) may still cause some OSDs to get many more PGs
than they would have otherwise (with the original weights).
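
For comparison, the two commands in question (the weight values are just
examples):

  # override reweight: range 0..1, lost again when the OSD is marked out
  ceph osd reweight 110 0.85

  # CRUSH reweight: changes the CRUSH map itself and persists
  ceph osd crush reweight osd.110 1.75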

In short, CRUSH reweight can and will give you a nicely balanced cluster
during normal operations, but if you're running things close to full
(unable to sustain an OSD or node loss and the resulting re-shuffling),
it may not save you.

Christian

> Greets,
> Stefan
> 
> >  On Tue, 20 Sep 2016 13:49:50 +0200 Dan van der Ster wrote:
> > 
> >> Hi Stefan,
> >>
> >> What's the current reweight value for osd.110? It cannot be increased above 1.
> >>
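> >> For example, the current override shows up in the REWEIGHT column of
> >> "ceph osd tree":
> >>
> >>   ceph osd tree | grep osd.110
> >>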
> >> Cheers, Dan
> >>
> >>
> >>
> >> On Tue, Sep 20, 2016 at 12:13 PM, Stefan Priebe - Profihost AG
> >> <s.priebe@xxxxxxxxxxxx> wrote:
> >>> Hi,
> >>>
> >>> while using Ceph Hammer I saw in the docs of ceph reweight-by-utilization
> >>> that there is a --no-increasing flag. I do not use it, but I have never
> >>> seen an increased weight value, even though some of my OSDs are really
> >>> empty.
> >>>
> >>> Example:
> >>> 821G  549G  273G  67% /var/lib/ceph/osd/ceph-110
> >>>
> >>> vs.
> >>>
> >>> 821G  767G   54G  94% /var/lib/ceph/osd/ceph-13
> >>>
> >>> I would expect ceph reweight-by-utilization to increase osd.110's
> >>> weight value, but instead it keeps lowering other OSDs.
> >>>
> >>> Greets,
> >>> Stefan
> > 
> > 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


