Hi Ruben,
it would be nice if you could share the output of the 'ceph tell osd.N
bluefs stats' command for these OSDs.
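For example, for the two OSDs from your health warning (run once per
affected OSD):

  ceph tell osd.91 bluefs stats
  ceph tell osd.106 bluefs stats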
Also, you might want to read the following thread:
https://www.spinics.net/lists/ceph-users/msg79062.html
It describes using 'ceph-volume lvm migrate' (or its counterpart in
ceph-bluestore-tool) to migrate spilled-over BlueFS data from the slow
device back to the DB volume.
Whether that gives a temporary or a permanent improvement depends on the
root cause of the spillover, though.
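As a rough sketch only (the OSD fsid and the DB VG/LV names below are
placeholders you would need to fill in for your setup, and please verify
the exact syntax against the docs for your release), with the OSD stopped
it looks something like:

  ceph-volume lvm migrate --osd-id 91 --osd-fsid <osd-fsid> --from data --target <db-vg>/<db-lv>

or, with ceph-bluestore-tool:

  ceph-bluestore-tool bluefs-bdev-migrate --path /var/lib/ceph/osd/ceph-91 \
    --devs-source /var/lib/ceph/osd/ceph-91/block \
    --dev-target /var/lib/ceph/osd/ceph-91/block.db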
Thanks,
Igor
On 8/26/2024 10:08 AM, Ruben Bosch wrote:
Hi all,
ceph version 18.2.2 (531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)
We are working on marking out OSDs on a host in an EC 4+2 setup. The OSDs
are HDDs with a separate DB on an NVMe device. All of these operations take
ages. After some time we see BLUEFS_SPILLOVER. Telling the affected OSDs to
compact sometimes helps, but not always. The OSDs have plenty of space
remaining in the DB, yet the spillover does not disappear.
[WRN] BLUEFS_SPILLOVER: 2 OSD(s) experiencing BlueFS spillover
    osd.91 spilled over 141 MiB metadata from 'db' device (15 GiB used of 50 GiB) to slow device
    osd.106 spilled over 70 MiB metadata from 'db' device (12 GiB used of 50 GiB) to slow device
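For reference, we trigger the compaction with the usual 'ceph tell' call,
e.g.:

  ceph tell osd.91 compact
  ceph tell osd.106 compact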
Has anyone seen similar behavior before, and if so, did you find a
workaround or a solution?
Kind regards,
Ruben Bosch
--
Igor Fedotov
Ceph Lead Developer
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx