I also ran into this with v16. In my case, trying to run a repair completely exhausted the RAM on the box, and the repair was unable to complete. After removing and recreating the OSD, I did notice that the new OSD has a drastically smaller OMAP size than the other OSDs. I don’t know if that actually means anything, but I wanted to mention it in case it does.

ID  CLASS  WEIGHT    REWEIGHT  SIZE    RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME
14  hdd    10.91409  1.00000   11 TiB  3.3 TiB  3.2 TiB  4.6 MiB  5.4 GiB  7.7 TiB  29.81  1.02   34      up  osd.14
16  hdd    10.91409  1.00000   11 TiB  3.3 TiB  3.3 TiB   20 KiB  9.4 GiB  7.6 TiB  30.03  1.03   35      up  osd.16

~ Sean

On Sep 20, 2021 at 8:27:39 AM, Paul Mezzanini <pfmeec@xxxxxxx> wrote:

> I got the exact same error on one of my OSDs when upgrading to 16. I
> used it as an exercise in trying to fix a corrupt RocksDB. I spent a few
> days of poking with no success. I got mostly tool crashes like you are
> seeing, with no forward progress.
>
> I eventually just gave up, purged the OSD, did a SMART long test on the
> drive to be sure, and then threw it back into the mix. It has been
> HEALTH_OK for a week now, after it finished refilling the drive.
>
> On 9/19/21 10:47 AM, Andrej Filipcic wrote:
> > 2021-09-19T15:47:13.610+0200 7f8bc1f0e700  2 rocksdb:
> > [db_impl/db_impl_compaction_flush.cc:2344] Waiting after background
> > compaction error: Corruption: block checksum mismatch: expected
> > 2427092066, got 4051549320 in db/251935.sst offset 18414386 size
> > 4032, Accumulated background error counts: 1
> > 2021-09-19T15:47:13.636+0200 7f8bbacf1700 -1 rocksdb: submit_common
> > error: Corruption: block checksum mismatch: expected 2427092066, got
> > 4051549320 in db/251935.sst offset 18414386 size 4032 code = 2
> > Rocksdb transaction:
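
P.S. In case it helps anyone searching the archives later, here is a rough sketch of the commands behind what Paul and I are describing, not the exact invocations either of us ran. The OSD id, data path, and device name are just examples taken from my output above (the path differs for containerized deployments), and the redeploy step depends on whether you use ceph-volume or cephadm:

    # per-OSD utilization, including the OMAP column shown above
    ceph osd df tree

    # offline check/repair of the OSD's BlueStore/RocksDB (stop the OSD first);
    # this is the step that ran the box out of RAM in my case
    ceph-bluestore-tool fsck   --path /var/lib/ceph/osd/ceph-14
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-14

    # giving up: remove the OSD, long-test the disk, then redeploy it
    ceph osd out 14
    ceph osd purge 14 --yes-i-really-mean-it
    smartctl -t long /dev/sdX      # later: smartctl -a /dev/sdX to read the result
    # redeploy with your usual tooling, e.g. ceph-volume lvm create --data /dev/sdX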