Hi Jan,

IIUC the attached log is from ceph-kvstore-tool, right? Can you please share the full OSD startup log as well? (A sketch of the commands for pulling it is below the quoted message.)

Thanks,
Igor

On 12/27/2023 4:30 PM, Jan Marek wrote:
Hello,

I have a problem with my Ceph cluster (3x mon nodes, 6x OSD nodes; every OSD node has 12 rotational disks and one NVMe device for the bluestore DB). The cluster is deployed by the ceph orchestrator and the OSDs use bluefs storage.

I started an upgrade from version 17.2.6 to 18.2.1 by invoking:

    ceph orch upgrade start --ceph-version 18.2.1

After the mon and mgr daemons were upgraded, the orchestrator tried to upgrade the first OSD node, but its OSDs keep crashing on startup. I've stopped the upgrade process, but one OSD node is now completely down.

After the upgrade I got some error messages and found /var/lib/ceph/crashxxxx directories; I'm attaching the files I found there to this message.

Please, can you advise what I can do now? It seems that RocksDB is either incompatible or corrupted :-(

Thanks in advance.

Sincerely
Jan Marek
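For reference, a minimal sketch of how to check the upgrade state and pull the recorded crash reports on an orchestrator-managed cluster (the crash id below is just a placeholder):

    ceph orch upgrade status        # confirm whether the upgrade is still in progress
    ceph orch upgrade stop          # stop it if it has not been stopped already
    ceph crash ls                   # list crash reports known to the cluster
    ceph crash info <crash-id>      # metadata and backtrace for one crash
    ceph -s                         # overall cluster health and OSD up/in counts

The output of "ceph crash info" usually carries the same backtrace as the files under the crash directories, plus the exact ceph version the daemon was running when it crashed.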
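And a sketch for capturing the full OSD startup log on a cephadm deployment, assuming osd.11 stands in for one of the failing OSDs and <fsid> for the cluster fsid (both are placeholders):

    cephadm ls | grep osd                                 # list local daemons and their names
    cephadm logs --name osd.11 > osd.11.startup.log       # journal of that daemon's systemd unit
    # equivalently, straight from journald:
    journalctl -u ceph-<fsid>@osd.11 > osd.11.startup.log

If the default log is too terse, bumping the debug levels for that OSD before restarting it should make the startup failure much more verbose, e.g. "ceph config set osd.11 debug_bluefs 20" (and likewise debug_bluestore and debug_rocksdb).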
--
Igor Fedotov
Ceph Lead Developer

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx