Re: enabling pg_autoscaler on a large production storage?

I did this on my cluster and a huge number of PGs were rebalanced.
I think setting this option to 'on' is a good idea if it's a brand-new
cluster.
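
For reference, a minimal sketch of how the mode is typically set on
Nautilus or later ('mypool' is a placeholder name, not a pool from this
thread):

    # Enable the autoscaler on one existing pool; on a pool with data
    # this can change pg_num immediately and trigger heavy rebalancing
    ceph osd pool set mypool pg_autoscale_mode on

    # Or only make it the default for newly created pools
    ceph config set global osd_pool_default_pg_autoscale_mode on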

On Tue, Jun 16, 2020 at 7:07 PM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:

> Could you share the output of
>
>     ceph osd pool ls detail
>
> ?
>
> This way we can see how the pools are configured and help recommend if
> pg_autoscaler is worth enabling.
>
> Cheers, Dan
>
> On Tue, Jun 16, 2020 at 11:51 AM Boris Behrens <bb@xxxxxxxxx> wrote:
> >
> > I read about the 'warn' option and we are already discussing this.
> >
> > I don't know if the PGs need tuning. I don't know what the impact
> > would be, or whether enabling it would make any difference.
> >
> > Our last Ceph admin, who has since left, created a ticket, and I am not
> > particularly familiar with Ceph. So I need to work on this ticket while
> > trying not to trash our Ceph storage :-)
> >
> > On Tue, Jun 16, 2020 at 11:39 AM, Dan van der Ster
> > <dan@xxxxxxxxxxxxxx> wrote:
> > >
> > > Hi,
> > >
> > > I agree with "someone" -- it's not a good idea to just naively enable
> > > pg_autoscaler on an existing cluster with lots of data and active
> > > customers.
> > >
> > > If you're curious about this feature, it would be harmless to start
> > > out by enabling it with pg_autoscale_mode = warn on each pool.
> > > This way you can see what the autoscaler would do if it were set to
> > > *on*. Then you can tweak the target_size_ratio or
> > > target_size_bytes values accordingly.
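> > >
> > > For example, a minimal sketch (the pool name 'mypool' and the ratio
> > > are placeholders, not values from this cluster):
> > >
> > >     # Warn-only mode: report what the autoscaler would do, change nothing
> > >     ceph osd pool set mypool pg_autoscale_mode warn
> > >
> > >     # Optional hint: expected fraction of the cluster this pool will use
> > >     ceph osd pool set mypool target_size_ratio 0.2
> > >
> > >     # Review the recommendations before flipping anything to 'on'
> > >     ceph osd pool autoscale-status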
> > >
> > > BTW, do you have some feeling that your 17000 PGs are currently not
> > > correctly proportioned for your cluster?
> > >
> > > -- Dan
> > >
> > > On Tue, Jun 16, 2020 at 11:31 AM Boris Behrens <bb@xxxxxxxxx> wrote:
> > > >
> > > > Hi,
> > > >
> > > > I would like to enable the pg_autoscaler on our Nautilus cluster.
> > > > Someone told me that I should be really, really careful not to
> > > > cause any customer impact.
> > > >
> > > > Maybe someone can share some experience on this?
> > > >
> > > > The cluster has 455 OSDs on 19 hosts with ~17000 PGs and ~1 PB of
> > > > raw storage, of which ~600 TB raw is used.
> >
> >
> >
> > --
> > The "UTF-8 problems" self-help group will, as an exception, meet in
> > the groüen hall this time.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



