Re: upmap balancer

Hi,
I would upgrade, configure the balancer correctly, then wait a bit for
it to smooth things out.
Afterwards you can reweight back to 1.0.
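For example, roughly like this (the osd id below is just a placeholder,
and it's worth double-checking the commands against the 14.2.8 docs):

    ceph balancer mode upmap
    ceph balancer on
    # once the balancer has smoothed things out:
    ceph osd reweight 12 1.0
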
-- dan

On Mon, Mar 16, 2020 at 4:19 PM Thomas Schneider <74cmonty@xxxxxxxxx> wrote:
>
> Hi Dan,
>
> indeed I'm trying to balance the PGs.
>
> In order to keep the Ceph cluster operational I used OSD reweight,
> meaning some specific OSDs are now at reweight 0.8 and 0.9 respectively.
>
> Question:
> Can I upgrade to Ceph 14.2.8 without resetting the weight to 1.0?
> Or should I clean up this reweight first, then upgrade to 14.2.8 and
> enable the balancer last?
>
>
> Regards
> Thomas
>
> Am 16.03.2020 um 16:10 schrieb Dan van der Ster:
> > Hi Thomas,
> > I lost track of your issue. Are you just trying to balance the PGs ?
> > 14.2.8 has big improvements -- check the release notes / blog post
> > about setting upmap_max_deviation down to 2 or 1.
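> > E.g. (if I recall the config path correctly):
> >
> >     ceph config set mgr mgr/balancer/upmap_max_deviation 1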
> > -- Dan
> >
> > On Mon, Mar 16, 2020 at 4:00 PM Thomas Schneider <74cmonty@xxxxxxxxx> wrote:
> >> Hi Dan,
> >>
> >> I have opened this bug report for the balancer not working as expected:
> >> https://tracker.ceph.com/issues/43586
> >>
> >> Then I thought it could make sense to balance the cluster manually by
> >> means of moving PGs from a heavily loaded OSD to another.
> >>
> >> I found your slides "Luminous: pg upmap (dev)
> >> <https://indico.cern.ch/event/669931/contributions/2742401/attachments/1533434/2401109/upmap.pdf>",
> >> but I didn't fully understand them.
> >>
> >> Could you please advise how to move PGs manually?
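> >>
> >> From the slides it looks like something along these lines (pg id and
> >> osd ids made up):
> >>
> >>     ceph osd pg-upmap-items 1.7 122 381
> >>
> >> i.e. remap PG 1.7 from osd.122 to osd.381 -- is that right?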
> >>
> >> Regards
> >> Thomas
> >>
> >> Am 23.01.2020 um 16:05 schrieb Dan van der Ster:
> >>> Hi Frank,
> >>>
> >>> No, it is basically balancing the num_pgs per TB (per OSD), so an
> >>> OSD with twice the capacity simply ends up with twice as many PGs.
> >>>
> >>> Cheers, Dan
> >>>
> >>>
> >>> On Thu, Jan 23, 2020 at 3:53 PM Frank R <frankaritchie@xxxxxxxxx> wrote:
> >>>
> >>>     Hi all,
> >>>
> >>>     Does using the Upmap balancer require that all OSDs be the same size
> >>>     (per device class)?
> >>>
> >>>     thx
> >>>     Frank
> >>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


