Re: OSDs crash - Since Pacific

Hi Wissem,

given the log output it looks like the suicide timeout has fired. In my experience this is often observed when DB performance degrades after bulk removals, and an offline compaction should provide some relief, at least temporarily... But if deletes are ongoing (e.g. due to cluster rebuilding), another compaction round might be needed.
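In case it helps, here's a minimal sketch of the offline compaction, assuming a systemd-managed, non-containerized OSD with its data under /var/lib/ceph/osd/ceph-<id> (substitute the real OSD id and adjust paths for your deployment):

    # Stop the affected OSD first; offline compaction needs exclusive access
    systemctl stop ceph-osd@<id>

    # Compact BlueStore's RocksDB; this can take a while on a large DB
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-<id> compact

    # Bring the OSD back up
    systemctl start ceph-osd@<id>

If taking the OSD down is not an option, an online compaction via "ceph tell osd.<id> compact" is an alternative, though it competes with client I/O while it runs.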


Thanks,

Igor

On 8/31/2022 2:37 PM, Wissem MIMOUNA wrote:
Hi Igor,
I've attached the full log file found alongside the crash report on the affected Ceph OSD server.
Thank you for your time :)
Hi Wissem,
sharing an OSD log snippet preceding the crash (e.g. the prior 20K lines) could
be helpful and hopefully will provide more insight - there might be some
errors/assertion details and/or other artefacts...
Thanks,
Igor

--
Igor Fedotov
Ceph Lead Developer

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


