Re: All older OSDs corrupted after Quincy upgrade

On 29/06/2022 17.51, Stefan Kooman wrote:
> 
> What is the setting of "bluestore_fsck_quick_fix_on_mount" in your 
> cluster / OSDs?

I don't have it set explicitly. `ceph config` says:

# ceph config get osd bluestore_fsck_quick_fix_on_mount
false
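
(For completeness, something like the following should show what an individual
running daemon actually has, in case a local ceph.conf or a per-daemon override
differs from the mon config db -- osd.0 is just an example ID:)

# ceph config show osd.0 bluestore_fsck_quick_fix_on_mount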

> What does a "ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-$id 
> get S per_pool_omap" give?

It returns 2 on the recently deployed OSDs and 1 on all the older ones.
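
(In case it's useful to anyone else hitting this, I checked each OSD with the
daemon stopped, since bluestore-kv needs exclusive access to the store,
roughly like this -- assuming the default /var/lib/ceph/osd layout:)

# for d in /var/lib/ceph/osd/ceph-*; do echo "== $d"; ceph-kvstore-tool bluestore-kv "$d" get S per_pool_omap; done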

> FWIW: we have had this config setting since some version of Luminous 
> (when this became a better option than stupid). Recently upgraded to 
> Octopus. Redeployed every OSD with these settings in Octopus. No issues. 
> But we have never done a bdev expansion though.

Yeah, the same configuration applies to the new OSDs, and those are fine. So
if it's related, there must be some other trigger factor too.

> If it's possible to export objects you might recover data ... but not 
> sure if that data would not be corrupted. With EC it first has to be 
> reassembled. Might be possible, but not an easy task.

Basically, if it's going to take more than two days of work to get the data
back (at least to get a recovery operation started; it's fine if the recovery
itself takes a while), I think I'd rather just wipe.

-- 
Hector Martin (marcan@xxxxxxxxx)
Public Key: https://mrcn.st/pub