Hi all,
For those looking for the exact commands, I think they are:
* Check value for bluestore_fsck_quick_fix_on_mount:
ceph config get osd bluestore_fsck_quick_fix_on_mount
* Set bluestore_fsck_quick_fix_on_mount to false:
ceph config set osd bluestore_fsck_quick_fix_on_mount false
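Besides the cluster-wide default, it may be worth confirming the value each running OSD actually sees, since a local ceph.conf can override the monitors' config database. A minimal sketch, assuming an OSD with ID 0 (adjust the daemon name for your cluster):

```shell
# Cluster-wide default stored in the monitors' config database:
ceph config get osd bluestore_fsck_quick_fix_on_mount

# Effective value as reported by a specific running daemon
# (osd.0 is just an example ID):
ceph config show osd.0 bluestore_fsck_quick_fix_on_mount
```

`ceph config show` reports the daemon's runtime value, so it also catches overrides that `ceph config get` would miss.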
We upgraded from Octopus to Pacific 16.2.6, and it seems we got "false"
as the default value (I don't see any explicit false setting in the servers' history).
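To check whether the option was ever changed cluster-wide, the monitors keep a history of changes to the configuration database; one way to inspect it (the grep filter is just for convenience):

```shell
# List recent changes to the cluster configuration database
# and filter for the option in question:
ceph config log | grep bluestore_fsck_quick_fix_on_mount
```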
Thanks a lot for the heads up Igor!
Cheers
On 28/10/21 at 17:37, Igor Fedotov wrote:
Dear Ceph users.
On behalf of the Ceph developer community I have to inform you about a
recently discovered severe bug which might cause data corruption. The
issue occurs during OMAP format conversion for clusters upgraded to
Pacific; new clusters aren't affected. The OMAP format conversion is
triggered by BlueStore's repair/quick-fix functionality, which might be
invoked either manually via ceph-bluestore-tool or automatically by an
OSD if 'bluestore_fsck_quick_fix_on_mount' is set to true.
Both OSD and MDS daemons are known to suffer from the issue, and
potentially others, e.g. RGW, might be affected as well. The major
symptom is a daemon's inability to start up or continue operating after
some OSDs have been "repaired".
More details on the bug and its status tracking can be found at:
https://tracker.ceph.com/issues/53062
We're currently working on a fix, which is expected to be available
in the upcoming v16.2.7 release.
Meanwhile please DO NOT SET bluestore_fsck_quick_fix_on_mount to true
(please immediately switch it to false if already set) and DO NOT RUN
ceph-bluestore-tool's repair/quick-fix commands.
Apologies for all the trouble this could cause.
Eneko Lacunza
Technical Director
Binovo IT Human Project
Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx