Hi,
I strongly agree with Joachim; I usually disable the autoscaler in
production environments. But the devs would probably appreciate bug
reports to improve it.
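For reference, turning it off per pool or cluster-wide looks roughly like
this (a sketch from memory on Pacific; the pool name is a placeholder, so
double-check against your release):
~# ceph osd pool set <pool> pg_autoscale_mode off   # single pool
~# ceph config set global osd_pool_default_pg_autoscale_mode off   # default for new pools
~# ceph osd pool set noautoscale   # cluster-wide pause (newer Pacific releases)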
Quoting Boris Behrens <bb@xxxxxxxxx>:
Hi,
I've just upgraded our object storages to the latest Pacific version
(16.2.14) and the autoscaler is acting weird.
On one cluster it just shows nothing:
~# ceph osd pool autoscale-status
~#
On the other clusters it shows this when it is set to warn:
~# ceph health detail
...
[WRN] POOL_TOO_MANY_PGS: 2 pools have too many placement groups
Pool .rgw.buckets.data has 1024 placement groups, should have 1024
Pool device_health_metrics has 1 placement groups, should have 1
Version 16.2.13 seems to behave normally.
Is this a known bug?
--
This time, as an exception, the "UTF-8 problems" self-help group will meet
in the large hall.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx