Hi,
I usually look for this:
[ceph: root@storage01 /]# ceph daemon osd.0 perf dump bluefs | grep -E "db_|slow_"
    "db_total_bytes": 21470642176,
    "db_used_bytes": 179699712,
    "slow_total_bytes": 0,
    "slow_used_bytes": 0,
If there were still spillover I would expect the "slow_" values (in
particular "slow_used_bytes") to be greater than 0. Is it possible
that the OSDs were compacted during or after the upgrade, so that the
spillover was corrected (temporarily)? Do you know how much spillover
you had before, and how big the DB was when you had the warnings?
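If you want to verify how the DB devices are currently laid out (e.g.
whether the 1 GB partitions are still in use), the OSD metadata should
show the dedicated DB partition and its size; roughly something like
this (assuming the usual bluefs_db_* metadata fields are reported):

ceph osd metadata 0 | grep bluefs_db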
Regards,
Eugen
Quoting Peter van Heusden <pvh@xxxxxxxxxxx>:
Hello everyone
I have a Ceph installation where some of the OSDs were misconfigured to use
1 GB SSD partitions for RocksDB. This caused spillover warnings ("BlueFS
spillover detected"). I recently upgraded to Quincy (17.2.5) using cephadm,
and the spillover warning vanished. This is
despite bluestore_warn_on_bluefs_spillover still being set to true.
Is there a way to investigate the current state of the DB to see if
spillover is, indeed, still happening?
Thank you,
Peter
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx