Hi Stefan, super thanks! I found a quick-fix command in the help output:

# ceph-bluestore-tool -h
[...]
Positional options:
  --command arg    fsck, repair, quick-fix, bluefs-export, bluefs-bdev-sizes,
                   bluefs-bdev-expand, bluefs-bdev-new-db, bluefs-bdev-new-wal,
                   bluefs-bdev-migrate, show-label, set-label-key, rm-label-key,
                   prime-osd-dir, bluefs-log-dump, free-dump, free-score,
                   bluefs-stats

but it's not documented in https://docs.ceph.com/en/octopus/man/8/ceph-bluestore-tool/. I guess I will stick with the tested command "repair". Nothing I found explains what exactly is executed on start-up when bluestore_fsck_quick_fix_on_mount = true.

Thanks for your quick answer!

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Stefan Kooman <stefan@xxxxxx>
Sent: 07 October 2022 09:07:37
To: Frank Schilder; Igor Fedotov; ceph-users@xxxxxxx
Subject: Re: OSD crashes during upgrade mimic->octopus

On 10/7/22 09:03, Frank Schilder wrote:
> Hi Igor and Stefan,
>
> thanks a lot for your help! Our cluster is almost finished with recovery and I would like to switch to off-line conversion of the SSD OSDs. In one of Stefan's earlier mails I could find the command for manual compaction:
>
> ceph-kvstore-tool bluestore-kv "/var/lib/ceph/osd/ceph-${OSD_ID}" compact
>
> Unfortunately, I can't find the command for performing the omap conversion. It is not mentioned at https://docs.ceph.com/en/quincy/releases/octopus/#upgrading-from-mimic-or-nautilus, even though that page does mention the option to skip conversion in step 5. How to continue with an off-line conversion is not mentioned. I know it has been posted before, but I seem unable to find it on this list. If someone could send me the command, I would be most grateful.

for osd in `ls /var/lib/ceph/osd/`; do ceph-bluestore-tool repair --path /var/lib/ceph/osd/$osd; done

That's what I use.

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
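
For reference, a minimal sketch of how the per-OSD offline repair and compaction discussed in this thread could be scripted end to end on one host. It only combines the two commands quoted above; the systemd unit names (ceph-osd@<id>), the noout step and the glob over /var/lib/ceph/osd/ceph-* are assumptions for a non-containerized deployment, not something stated in the thread.

#!/bin/bash
# Sketch only: run the offline omap conversion (repair) and a manual
# compaction on every OSD on this host, one at a time. Assumes
# non-containerized OSDs with systemd units named ceph-osd@<id>.
set -euo pipefail

ceph osd set noout                         # avoid rebalancing while OSDs are down

for path in /var/lib/ceph/osd/ceph-*; do
    id="${path##*-}"                       # numeric OSD id from the directory name

    systemctl stop "ceph-osd@${id}"        # the OSD must be offline for repair

    # Offline repair performs the omap conversion that would otherwise
    # run on start-up with bluestore_fsck_quick_fix_on_mount = true.
    ceph-bluestore-tool repair --path "${path}"

    # Manual RocksDB compaction, as quoted earlier in the thread.
    ceph-kvstore-tool bluestore-kv "${path}" compact

    systemctl start "ceph-osd@${id}"
done

ceph osd unset noout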