Hi Peter,

On Thursday, January 12th, 2023 at 15:12, Peter van Heusden <pvh@xxxxxxxxxxx> wrote:

> I have a Ceph installation where some of the OSDs were misconfigured to use
> 1GB SSD partitions for rocksdb. This caused a spillover ("BlueFS spillover
> detected"). I recently upgraded to quincy using cephadm (17.2.5) and the
> spillover warning vanished. This is despite bluestore_warn_on_bluefs_spillover
> still being set to true.

I noticed this on Pacific as well, and I think it's due to this commit:
https://github.com/ceph/ceph/commit/d17cd6604b4031ca997deddc5440248aff451269.
It removes the logic that would normally update the spillover health check,
so it never triggers anymore.

As others mentioned, you can get the relevant metrics from Prometheus and
set up alerts there instead. But it does make me wonder how many people
might have spillover in their clusters and not even realize it, since
there's no warning by default.

Cheers,

--
Ben
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
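For what it's worth, here is a rough sketch of the kind of Prometheus alert rule I mean. It assumes the mgr prometheus module is enabled and that your deployment exports the `ceph_bluefs_slow_used_bytes` metric (metric and label names may differ between releases, so check what your exporter actually emits):

```yaml
groups:
  - name: ceph-bluefs
    rules:
      - alert: CephBlueFSSpillover
        # A non-zero slow_used_bytes means RocksDB data has spilled from
        # the dedicated DB device onto the slow (main) device.
        expr: ceph_bluefs_slow_used_bytes > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "BlueFS spillover detected on {{ $labels.ceph_daemon }}"
```

You can also spot-check a single OSD from the host with `ceph daemon osd.<id> perf dump bluefs` and look at the `slow_used_bytes` counter there.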