Hello,
Can you confirm that the bug only affects Pacific and not Octopus?
Thanks.
F.
On 29/10/2021 at 16:39, Neha Ojha wrote:
On Thu, Oct 28, 2021 at 8:11 AM Igor Fedotov <igor.fedotov@xxxxxxxx> wrote:
On 10/28/2021 12:36 AM, mgrzybowski wrote:
Hi Igor
I'm very happy that you were able to reproduce and find the bug.
Nice one!
In my opinion, the first priority at the moment should be to warn other users
in the official upgrade docs:
https://docs.ceph.com/en/latest/releases/pacific/#upgrading-from-octopus-or-nautilus
This has been escalated to the Ceph developer community; hopefully it will be
done shortly.
We have added a warning in our docs
https://ceph--43706.org.readthedocs.build/en/43706/releases/pacific/#upgrading-from-octopus-or-nautilus.
Thanks,
Neha
Please also note the tracker: https://tracker.ceph.com/issues/53062
and the fix: https://github.com/ceph/ceph/pull/43687
In my particular case (I have a home storage server based on CephFS and a
bunch of random HDDs, SMRs too :( ),
I restarted the OSDs one at a time after all RADOS objects were
repaired. Unfortunately, four disks
developed bad sectors under the recovery strain, so I have a small number of
unfound objects.
The bad disks were removed one by one. Now I'm waiting for backfill, then
scrubs.
Making the crashed OSDs work again would be nice, but it should not be
necessary.
What about some kind of export and import of PGs? Could this work on
crashed OSDs with a failed omap format upgrade?
I can't say for sure what the results would be: export/import should
probably work, but the omaps in the restored PGs would still be broken.
Most likely the OSDs (and other daemons) would get stuck on that invalid
data... Converting the ill-formatted omaps back to their regular form (either
the new or the legacy one) looks like a more straightforward and predictable task...
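
For context, a PG export/import with ceph-objectstore-tool would look roughly
like the sketch below; the data paths, PG id and file name are only
illustrative, and both OSDs must be stopped while the tool runs:

    # on the crashed OSD (stopped), export one PG to a file
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --pgid 4.1f --op export --file /tmp/pg.4.1f.export

    # on a healthy OSD (also stopped), import the exported PG
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 \
        --op import --file /tmp/pg.4.1f.export

As noted above, even if the export/import itself succeeds, the omaps carried
over with the PG would still be in the broken format, so this would not avoid
the conversion problem.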
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx