Re: Nautilus - PG count decreasing after adding OSDs

Eugen,

I didn't really think my cluster was eating itself, but I also didn't want
to be in denial.

Regarding the autoscaler, I really thought that it only went up - I didn't
expect that it would decrease the number of PGs.  Plus, I thought I had it
turned off.  I see now that it's off globally but enabled for this
particular pool.  Also, I see that the target PG count is lower than the
current count.
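
For the record, in case it helps anyone else, I believe something like the
following would show the per-pool setting and pin the autoscaler off for
just that pool (the pool name below is only a placeholder for my EC data
pool):

  # show the autoscaler's view, including target PG counts per pool
  ceph osd pool autoscale-status

  # check, then disable, the autoscaler for this one pool
  ceph osd pool get <pool-name> pg_autoscale_mode
  ceph osd pool set <pool-name> pg_autoscale_mode off

For now I'm going to leave it enabled and let it finish what it started.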

I guess you learn something new every day.

-Dave

--
Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx
607-760-2328 (Cell)
607-777-4641 (Office)


On Mon, Mar 29, 2021 at 7:52 AM Eugen Block <eblock@xxxxxx> wrote:

> Hi,
>
> that sounds like the pg_autoscaler is doing its work. Check with:
>
> ceph osd pool autoscale-status
>
> I don't think ceph is eating itself or that you're losing data. ;-)
>
>
> Zitat von Dave Hall <kdhall@xxxxxxxxxxxxxx>:
>
> > Hello,
> >
> > About 3 weeks ago I added a node and increased the number of OSDs in my
> > cluster from 24 to 32, and then marked one old OSD down because it was
> > frequently crashing.
> >
> > After adding the new OSDs the PG count jumped fairly dramatically, but
> > ever since, amidst a continuous low level of rebalancing, the number of
> > PGs has gradually decreased to about 25% below its maximum value.
> > Although I don't have specific notes, my perception is that the current
> > number of PGs is actually lower than it was before I added OSDs.
> >
> > So what's going on here?  It is possible to imagine that my cluster is
> > slowly eating itself, and that I'm about to lose 200TB of data.  It's
> > also possible to imagine that this is all due to the gradual
> > optimization of the pools.
> >
> > Note that the primary pool is an EC 8+2 pool containing about 124TB.
> >
> > Thanks.
> >
> > -Dave
> >
> > --
> > Dave Hall
> > Binghamton University
> > kdhall@xxxxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
