Hi Patrick,
> Please be careful resetting the journal. It was not necessary. You can
> try to recover the missing inode using cephfs-data-scan [2].
Yes, I did that very reluctantly, as a last resort after trying
everything else. But since it only produced another error, I restored
the previous state. Downgrading to the previous version hadn't come to
mind until minutes before Dan wrote that there's a new assertion in
16.2.12 (I didn't expect a corruption issue to be "fixable" like that).
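For anyone finding this thread later: as far as I understand the
disaster-recovery docs [2], the data-scan pass would look roughly like
this (run only with the file system offline and all MDS daemons
stopped; "cephfs_data" is a placeholder for the actual data pool name):

    # initialise recovery metadata in the metadata pool
    cephfs-data-scan init
    # pass 1: recover file sizes and mtimes from the data objects
    cephfs-data-scan scan_extents cephfs_data
    # pass 2: rebuild inode metadata from each file's first object
    cephfs-data-scan scan_inodes cephfs_data
    # pass 3: check and repair link counts
    cephfs-data-scan scan_links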
> Thanks for the report. Unfortunately this looks like a false positive.
> You're not using snapshots, right?
Or fortunately for me? We do use snapshots: an automated schedule takes
daily snapshots of certain top-level directories. Our main folder is
/storage, which is where this issue occurred.
> In any case, if you can reproduce it again with:
>
>     ceph config set mds debug_mds 20
>     ceph config set mds debug_ms 1
I'll try that tomorrow and let you know, thanks!
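(Assuming the usual config workflow, I'd revert the debug levels
afterwards with something like

    ceph config rm mds debug_mds
    ceph config rm mds debug_ms

so the logs don't eat the disk.)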
> and upload the logs using ceph-post-file [1], that would be helpful to
> understand what happened.
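Will do. If I'm reading the ceph-post-file man page [1] correctly, the
upload would be something along these lines (description and log path
are just examples):

    ceph-post-file -d "MDS corrupt dentry assert on 16.2.12" \
        /var/log/ceph/ceph-mds.*.log

It prints a tag that I'd then post back here.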
> After that you can disable the check as Dan pointed out:
>
>     ceph config set mds mds_abort_on_newly_corrupt_dentry false
>     ceph config set mds mds_go_bad_corrupt_dentry false
>
> NOTE FOR OTHER READERS OF THIS MAIL: it is not recommended to blindly
> set these configs, as the MDS is trying to catch legitimate metadata
> corruption.
> [1] https://docs.ceph.com/en/quincy/man/8/ceph-post-file/
> [2] https://docs.ceph.com/en/latest/cephfs/disaster-recovery-experts/
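(For completeness: I assume the current value can be inspected and the
check re-enabled later via

    ceph config get mds mds_abort_on_newly_corrupt_dentry
    ceph config set mds mds_abort_on_newly_corrupt_dentry true

once the underlying cause is understood.)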
--
Bauhaus-Universität Weimar
Bauhausstr. 9a, R308
99423 Weimar, Germany
Phone: +49 3643 58 3577
www.webis.de
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx