I think you're facing the issue from https://tracker.ceph.com/issues/36268
This has been fixed in Nautilus. Unfortunately I don't see any
fix/workaround for Mimic other than OSD redeployment...
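FWIW, redeployment is just the usual remove/recreate cycle. A rough
sketch for a standalone BlueStore OSD (osd.2 taken from your path
below; /dev/sdX is only a placeholder, adjust both to your setup and
wait for backfill to finish before purging):

  ceph osd out 2
  # wait until all PGs are active+clean again
  systemctl stop ceph-osd@2
  ceph osd purge 2 --yes-i-really-mean-it
  ceph-volume lvm zap --destroy /dev/sdX
  ceph-volume lvm create --bluestore --data /dev/sdX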
On 7/6/2020 10:38 PM, Stefan Kooman wrote:
On 2020-07-06 14:52, Igor Fedotov wrote:
Hi Stefan,
looks like the BlueFS allocator is unable to provide additional space
for the BlueFS log. The root cause might be a lack of free space
and/or high space fragmentation.
Prior log lines and disk configuration (e.g. ceph-bluestore-tool
bluefs-bdev-sizes) might be helpful for further analysis.
ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-2/
inferring bluefs devices from bluestore path
slot 1 /var/lib/ceph/osd/ceph-2//block
1 : size 0x18ffc00000 : own
0x[1400000~f3ec00000,f40100000~77ff00000,16c0100000~234300000,18f4500000~1900000,18f6000000~1300000,18fe000000~1c00000]
And here are some log entries just before the assert that might be helpful:
-1415> 2020-07-06 16:07:37.880 7f0307fc5c00 5 rocksdb:
[/build/ceph-13.2.8/src/rocksdb/db/db_impl_open.cc:919] [default]
[WriteLevel0TableForRecovery] Level-0 table #31: started
-1415> 2020-07-06 16:07:37.896 7f0307fc5c00 1 bluefs _allocate
failed to allocate 0x400000 on bdev 1, free 0x270000; fallback to bdev 2
-1415> 2020-07-06 16:07:37.896 7f0307fc5c00 -1 bluefs _allocate
failed to allocate 0x on bdev 2, dne
-1415> 2020-07-06 16:07:37.904 7f0307fc5c00 -1
/build/ceph-13.2.8/src/os/bluestore/BlueFS.cc: In function 'int
BlueFS::_flush_and_sync_log(std::unique_lock<std::mutex>&, uint64_t,
uint64_t)' thread 7f0307fc5c00 time 2020-07-06 16:07:37.899520
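If I'm reading those numbers right, BlueFS needed 0x400000 (4 MiB) for
its log but bdev 1 only had 0x270000 (about 2.4 MiB) free, and since
this OSD has no separate DB/WAL device (bluefs-bdev-sizes only lists
slot 1) there is no bdev 2 to fall back to, which is where the assert
fires.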
Thanks,
Stefan