Hi Stefan, thanks for looking at this.

The conversion has happened on 1 host only. Status is:

- all daemons on all hosts upgraded
- all OSDs on 1 OSD-host were restarted with bluestore_fsck_quick_fix_on_mount = true in that host's local ceph.conf; these OSDs completed conversion and rebooted. I would assume that the freshly created OMAPs are compacted by default?
- unfortunately, the converted SSD-OSDs on this host died
- now SSD OSDs on other (un-converted) hosts have also started crashing randomly and very badly (not possible to restart due to stuck D-state processes)

Does compaction even work properly on upgraded but unconverted OSDs?

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Stefan Kooman <stefan@xxxxxx>
Sent: 06 October 2022 13:27
To: ceph-users@xxxxxxx; Frank Schilder
Subject: Re: OSD crashes during upgrade mimic->octopus

On 10/6/22 13:06, Frank Schilder wrote:
> Hi all,
>
> we are stuck with a really unpleasant situation and we would
> appreciate help. Yesterday we completed the Ceph daemon upgrade from
> mimic to octopus all the way through with
> bluestore_fsck_quick_fix_on_mount = false and started the OSD OMAP
> conversion this morning. Everything went well at the beginning. The
> conversion went much faster than expected and OSDs slowly came back
> up. Unfortunately, trouble was just around the corner.

That sucks. Not sure how far into the upgrade process you are based on the info in this mail, but just to make sure you are not hit by RocksDB degradation: have you done an offline compaction of the OSDs after the conversion? We have seen that a degraded RocksDB can severely impact performance. So make sure the OSDs are compacted, i.e. stop the OSD processes:

systemctl stop ceph-osd.target

and then compact the key-value store of every OSD on the host:

df | grep "/var/lib/ceph/osd" | awk '{print $6}' | cut -d '-' -f 2 | sort -n | xargs -n 1 -P 10 -I OSD ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-OSD compact

There can be a ton of other things happening, of course. In that case, try to gather debug logs.

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
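
For reference, a minimal sketch of the per-OSD conversion plus offline compaction sequence discussed in this thread, assuming a hypothetical OSD id of 12 and the default /var/lib/ceph/osd/ceph-12 data path. This is an illustration built only from the settings and tools named above, not the exact procedure run on the affected host:

# /etc/ceph/ceph.conf on the OSD host (hypothetical host-local override)
[osd]
bluestore_fsck_quick_fix_on_mount = true

# restart the OSD; the OMAP conversion runs on the next mount
systemctl restart ceph-osd@12

# after conversion, stop the OSD and compact its RocksDB offline
systemctl stop ceph-osd@12
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 compact

# bring the OSD back up
systemctl start ceph-osd@12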