Funny, I was planning to look next week at how to deal with different
OSD sizes, or whether somebody already has a fix for that. My
workaround is also to change the YAML file for Prometheus.
Quoting "Devin A. Bougie" <devin.bougie@xxxxxxxxxxx>:
Hi, All. We are using cephadm to manage a 19.2.0 cluster on
fully-updated AlmaLinux 9 hosts, and would greatly appreciate help
modifying or overriding the alert rules in ceph_default_alerts.yml.
Is the best option to simply update the
/var/lib/ceph/<cluster_id>/home/ceph_default_alerts.yml file?
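One caveat with editing that file directly: cephadm regenerates the
monitoring stack's config from its own templates, so a hand edit may be
overwritten on the next reconfig/redeploy. If I remember the cephadm
monitoring docs correctly, the supported route is to ship an extra rule
file through a config-key (the filename custom_alerts.yml below is just
an example name; untested sketch, adjust to your cluster):

```shell
# Sketch: register a custom Prometheus alert-rule file with cephadm,
# then push the new config out to the prometheus daemon.
ceph config-key set \
  mgr/cephadm/services/prometheus/alerting/custom_alerts.yml \
  -i custom_alerts.yml

# Regenerate and apply the prometheus configuration.
ceph orch reconfig prometheus
```

The same mechanism should let you carry your replacement rule across
upgrades instead of re-editing the generated file each time.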
In particular, we’d like to either disable the CephPGImbalance alert
or change it to calculate averages per-pool or per-crush_rule
instead of globally as in [1].
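Not a full fix, but for the "per device class" flavor of this, a sketch
of what a replacement rule might look like. It joins ceph_osd_numpg
against ceph_osd_metadata (which carries a device_class label in the
mgr prometheus module's exports) so the 30% deviation is computed
against the average of each device class rather than the global
average. Untested, and metric/label names may differ by release, so
verify against your own /metrics output first:

```yaml
# Hypothetical per-device-class variant of CephPGImbalance (sketch).
# Assumes ceph_osd_numpg and ceph_osd_metadata{device_class=...} are
# exported, and that ceph_osd_metadata has value 1 (label-join trick).
- alert: CephPGImbalance
  expr: |
    abs(
      (ceph_osd_numpg > 0)
        * on (ceph_daemon) group_left (device_class) ceph_osd_metadata
      / on (job, device_class) group_left ()
        avg by (job, device_class) (
          (ceph_osd_numpg > 0)
            * on (ceph_daemon) group_left (device_class) ceph_osd_metadata
        )
      - 1
    ) > 0.30
  for: 5m
  labels:
    severity: warning
  annotations:
    summary: "OSD PG count deviates >30% from its device-class average"
```

Grouping per pool or per crush_rule would need a metric that maps OSDs
to rules, which I don't believe the default exporter provides, so
device_class is probably the closest readily available label.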
We currently have PG autoscaling enabled, and have two separate
crush_rules (one with large spinning disks, one with much smaller
nvme drives). Although I don’t believe it causes any technical
issues with our configuration, our dashboard is full of
CephPGImbalance alerts that would be nice to clean up without having
to create periodic silences.
Any help or suggestions would be greatly appreciated.
Many thanks,
Devin
[1] https://github.com/rook/rook/discussions/13126#discussioncomment-10043490
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx