Re: Nautilus - PG Autoscaler Global vs Pool Setting

All,

In looking at how to set the default PG autoscale mode, I notice that there
is a global option setting and a per-pool option setting.  It seems that the
options at the pool level are off, warn, and on.  I assume the same is true
for the global setting.

Is there a way to clear the per-pool setting and have the pool honor the
global setting?  I think I'm looking for 'off, warn, on, or global'.  It
seems that once the per-pool option is set for all of one's pools, the
global value becomes irrelevant.  This also implies that to temporarily
suspend autoscaling one would have to change the setting on each pool and
then change it back afterward.
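
If I'm reading the docs right, the relevant knobs look roughly like this
("mypool" is just a placeholder, and the global form is my assumption about
how the default is supposed to apply):

# global default, for pools that don't override it
ceph config set global osd_pool_default_pg_autoscale_mode warn

# per-pool override ("mypool" is a placeholder)
ceph osd pool set mypool pg_autoscale_mode off

# see what the autoscaler thinks about each pool
ceph osd pool autoscale-status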

Thoughts?

Thanks

-Dave

--
Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx


On Mon, Mar 29, 2021 at 1:44 PM Anthony D'Atri <anthony.datri@xxxxxxxxx>
wrote:

> Yes, the PG autoscaler has a way of reducing PG count way too far.  There's
> a claim that it's better in Pacific, but I tend to recommend disabling it
> and calculating / setting pg_num manually.
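>
> A minimal sketch of the manual approach, assuming a pool named "mypool"
> and using the usual ~100-PGs-per-OSD rule of thumb (with 32 OSDs and an
> EC 8+2 pool: 32 * 100 / 10 = 320, rounded down to a power of two):
>
> # stop the autoscaler from touching this pool ("mypool" is a placeholder)
> ceph osd pool set mypool pg_autoscale_mode off
> # then pin pg_num / pgp_num explicitly
> ceph osd pool set mypool pg_num 256
> ceph osd pool set mypool pgp_num 256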
>
> > On Mar 29, 2021, at 9:06 AM, Dave Hall <kdhall@xxxxxxxxxxxxxx> wrote:
> >
> > Eugen,
> >
> > I didn't really think my cluster was eating itself, but I also didn't
> > want to be in denial.
> >
> > Regarding the autoscaler, I really thought that it only went up - I
> > didn't expect that it would decrease the number of PGs.  Plus, I thought
> > I had it turned off.  I see now that it's off globally but enabled for
> > this particular pool.  Also, I see that the target PG count is lower
> > than the current one.
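> >
> > For reference, I believe the per-pool value can be checked with something
> > along these lines (pool name is a placeholder):
> >
> > ceph osd pool get mypool pg_autoscale_mode
> > ceph osd pool autoscale-status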
> >
> > I guess you learn something new every day.
> >
> > -Dave
> >
> > --
> > Dave Hall
> > Binghamton University
> > kdhall@xxxxxxxxxxxxxx
> > 607-760-2328 (Cell)
> > 607-777-4641 (Office)
> >
> >
> > On Mon, Mar 29, 2021 at 7:52 AM Eugen Block <eblock@xxxxxx> wrote:
> >
> >> Hi,
> >>
> >> that sounds like the pg_autoscaler is doing its work. Check with:
> >>
> >> ceph osd pool autoscale-status
> >>
> >> I don't think ceph is eating itself or that you're losing data. ;-)
> >>
> >>
> >> Zitat von Dave Hall <kdhall@xxxxxxxxxxxxxx>:
> >>
> >>> Hello,
> >>>
> >>> About 3 weeks ago I added a node and increased the number of OSDs in my
> >>> cluster from 24 to 32, and then marked one old OSD down because it was
> >>> frequently crashing.
> >>>
> >>> After adding the new OSDs the PG count jumped fairly dramatically, but
> >>> ever since, amidst a continuous low level of rebalancing, the number of
> >>> PGs has gradually decreased to about 25% below its maximum value.
> >>> Although I don't have specific notes, my perception is that the current
> >>> number of PGs is actually lower than it was before I added OSDs.
> >>>
> >>> So what's going on here?  It is possible to imagine that my cluster is
> >>> slowly eating itself, and that I'm about to lose 200TB of data.  It's
> >>> also possible to imagine that this is all due to the gradual
> >>> optimization of the pools.
> >>>
> >>> Note that the primary pool is an EC 8+2 pool containing about 124TB.
> >>>
> >>> Thanks.
> >>>
> >>> -Dave
> >>>
> >>> --
> >>> Dave Hall
> >>> Binghamton University
> >>> kdhall@xxxxxxxxxxxxxx
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



