BlueFS spillover

Hi all,

ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)

We are marking out the OSDs on a host in an EC 4+2 pool. The OSDs are HDDs,
each with a separate DB on an NVMe device. All of these operations take a
very long time, and after a while we see BLUEFS_SPILLOVER. Telling the
affected OSDs to compact sometimes clears the warning, but not always. The
OSDs still have plenty of space left on the DB device, yet the spillover
does not disappear.
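
For reference, this is roughly how we trigger the compaction (osd.91 as an
example; the same applies to osd.106):

     # Ask the OSD to compact its RocksDB; this may move spilled
     # metadata back from the slow (HDD) device to the DB device.
     ceph tell osd.91 compact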

[WRN] BLUEFS_SPILLOVER: 2 OSD(s) experiencing BlueFS spillover
     osd.91 spilled over 141 MiB metadata from 'db' device (15 GiB used of 50 GiB) to slow device
     osd.106 spilled over 70 MiB metadata from 'db' device (12 GiB used of 50 GiB) to slow device
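
To see how much BlueFS data actually sits on each device, the BlueFS perf
counters can be dumped on the OSD host (osd.91 as an example; field names
as we understand them on Reef):

     # Dump BlueFS usage counters from the OSD's admin socket.
     # db_used_bytes / db_total_bytes show DB device usage;
     # slow_used_bytes shows how much has spilled to the HDD.
     ceph daemon osd.91 perf dump bluefs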

Has anyone seen this behavior before, and if so, found a workaround or a
solution?

Kind regards,

Ruben Bosch