Re: Autoscaler problems in pacific

Found the bug for the TOO_MANY_PGS warning: https://tracker.ceph.com/issues/62986
(a possible way to silence it in the meantime is sketched below the quoted message).
But I am still not sure why I get no output at all on that one cluster.
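
In case anyone else hits the empty output: one possible cause I have seen
discussed (not confirmed for my cluster yet) is that the pacific autoscaler
silently skips pools whose crush rules have overlapping roots, e.g. the default
root plus a device-class shadow root. A quick way to check which rule and root
each pool uses (pool and rule names below are just examples from my setup):

~# ceph osd pool get .rgw.buckets.data crush_rule
~# ceph osd crush rule dump replicated_rule | grep item_name
~# ceph osd crush tree --show-shadow | head

If some rules take the plain default root and others a device-class root, the
skipped pools would explain why autoscale-status prints nothing.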

On Wed, 4 Oct 2023 at 14:08, Boris Behrens <bb@xxxxxxxxx> wrote:

> Hi,
> I've just upgraded our object storage clusters to the latest pacific version
> (16.2.14) and the autoscaler is acting weird.
> On one cluster it just shows nothing:
> ~# ceph osd pool autoscale-status
> ~#
>
> On the other clusters it shows this when it is set to warn:
> ~# ceph health detail
> ...
> [WRN] POOL_TOO_MANY_PGS: 2 pools have too many placement groups
>     Pool .rgw.buckets.data has 1024 placement groups, should have 1024
>     Pool device_health_metrics has 1 placement groups, should have 1
>
> Version 16.2.13 seems to behave normally.
> Is this a known bug?
> --
> The "UTF-8 problems" self-help group will, as an exception, meet in the large
> hall this time.
>
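
Until the fix from the tracker is released, something like this should quiet
the bogus warning (untested on my side, and the mute TTL is just an example
value):

~# ceph health mute POOL_TOO_MANY_PGS 1w

or, per pool, stop the autoscaler from evaluating it at all:

~# ceph osd pool set .rgw.buckets.data pg_autoscale_mode off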


-- 
The "UTF-8 problems" self-help group will, as an exception, meet in the large
hall this time.