Re: BLUEFS_SPILLOVER BlueFS spillover detected

This is a known issue with RocksDB/BlueFS; it has been discussed multiple times on this mailing list...

This should improve starting with Nautilus v14.2.12, thanks to the following PRs:

https://github.com/ceph/ceph/pull/33889

https://github.com/ceph/ceph/pull/37091


Please note that these PRs don't fix existing spillovers (use KV compaction or BlueFS data migration via ceph-bluestore-tool to fix those), but rather prevent new ones from appearing.
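
For reference, a rough sketch of those two remediation paths (osd.63 and the default OSD data path below are assumptions based on your report; adjust them to your deployment):

# Online: trigger a RocksDB compaction on the affected OSD
ceph tell osd.63 compact

# Offline (with the OSD stopped): migrate data that spilled onto the
# slow device back to the dedicated DB device
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-63 \
    --devs-source /var/lib/ceph/osd/ceph-63/block \
    --dev-target /var/lib/ceph/osd/ceph-63/block.db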


Thanks,

Igor


On 11/14/2020 7:36 AM, Zhenshi Zhou wrote:
Hi,

I have a cluster running 14.2.8.
I created the OSDs with a dedicated PCIe device for wal/db when I deployed the cluster.
I set 72G for the db and 3G for the wal on each OSD.
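
(For context, the deployment was along these lines; a sketch, with illustrative device paths rather than my actual ones:)

# Create a BlueStore OSD with separate db and wal devices
ceph-volume lvm create --bluestore \
    --data /dev/sdX \
    --block.db /dev/nvme0n1p1 \
    --block.wal /dev/nvme0n1p2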

And now my cluster has been in a WARN state for a long time.
# ceph health detail
HEALTH_WARN BlueFS spillover detected on 1 OSD(s)
BLUEFS_SPILLOVER BlueFS spillover detected on 1 OSD(s)
      osd.63 spilled over 33 MiB metadata from 'db' device (1.5 GiB used of 72 GiB) to slow device
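
(The BlueFS usage counters can also be dumped via the admin socket; a sketch, assuming access to the host running osd.63:)

# Show db_used_bytes vs. slow_used_bytes for the OSD
ceph daemon osd.63 perf dump bluefs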

I searched on Google and found https://tracker.ceph.com/issues/38745
I'm not sure if it's the same issue.
How can I deal with this?

THANKS
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



