Re: Stop Rebalancing

For the moment, Dan's workaround sounds good to me, but I'd like to
understand how we got here, in terms of the decisions that were made
by the autoscaler.
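
(For reference, the workaround quoted below boils down to something
like this for each affected pool, with <pool> and X as placeholders:

    ceph osd pool set <pool> pg_autoscale_mode off   # stop autoscaling this pool
    ceph osd pool get <pool> pgp_num                 # read the current pgp_num, X
    ceph osd pool set <pool> pg_num X                # pin pg_num at that value

That should stop further PG splitting and merging while the backfill
catches up.)
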
We have a config option called "target_max_misplaced_ratio" (default
value 0.05), which is supposed to limit the number of misplaced
objects in the cluster to 5% of the total. Ray, in your case, does
that limit seem to have held, given that you have ~1.3 billion
misplaced objects?
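
(To double-check the value the cluster is actually using, something
like this should show it:

    ceph config get mgr target_max_misplaced_ratio

assuming it hasn't been overridden at a different level.)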

In any case, let's use https://tracker.ceph.com/issues/55303 to
capture some more debug data that can help us understand the actions
of the autoscaler. To start with, it would be helpful if you could
attach the cluster and audit logs, the output of ceph -s and ceph df,
along with the output of ceph osd pool autoscale-status and ceph osd
pool ls detail. Junior (Kamoltat), is there anything else that would
be useful to capture to get to the bottom of this?
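
(For convenience, gathering those could look something like the
following; the file names are just suggestions:

    ceph -s                        > ceph-status.txt
    ceph df                        > ceph-df.txt
    ceph osd pool autoscale-status > autoscale-status.txt
    ceph osd pool ls detail        > pool-ls-detail.txt

and then attach the files to the tracker issue.)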

Just for future reference, 16.2.8 and Quincy will include a
"noautoscale" cluster-wide flag, which can be used to disable
autoscaling across all pools during maintenance periods.
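
(Assuming the syntax ships as currently proposed, usage would look
something like:

    ceph osd pool set noautoscale     # disable autoscaling cluster-wide
    ceph osd pool unset noautoscale   # re-enable it after maintenance
    ceph osd pool get noautoscale     # check the current flag state

but treat that as tentative until the release notes are out.)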

Thanks,
Neha


On Wed, Apr 13, 2022 at 1:58 PM Ray Cunningham
<ray.cunningham@xxxxxxxxxxxxxx> wrote:
>
> We've done that, I'll update with what happens overnight. Thanks everyone!
>
>
> Thank you,
>
> Ray
>
> ________________________________
> From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
> Sent: Wednesday, April 13, 2022 4:49 PM
> To: Ceph Users <ceph-users@xxxxxxx>
> Subject:  Re: Stop Rebalancing
>
>
>
> > In any case, isn't this still the best approach to make all PGs go
> > active+clean ASAP in this scenario?
> >
> > 1. turn off the autoscaler (for those pools, or fully)
> > 2. for any pool with pg_num_target or pgp_num_target values, get the
> > current pgp_num X and use it to `ceph osd pool set <pool> pg_num X`.
> >
> > Can someone confirm that or recommend something different?
>
> FWIW that’s what I would do.
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



