I had a filesystem rank get damaged when the MDS had an error writing the log to the OSD. Is damage expected when a log write fails?

According to the log messages, an OSD write failed because the MDS attempted to write a bigger chunk than the OSD's maximum write size. I can probably figure out why that happened and fix it, but OSD write failures can happen for lots of reasons, and I would have expected the MDS just to discard the recent filesystem updates, issue a log message, and keep going. The user had presumably not been told those updates were committed.

And how do I repair this now? Is this a job for

    cephfs-journal-tool event recover_dentries
    cephfs-journal-tool journal reset

?

This is Jewel.

-- 
Bryan Henderson                                        San Jose, California
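
P.S. For concreteness, the sequence I had in mind is the one from the CephFS disaster-recovery documentation, roughly as below. The backup filename and the rank number 0 are just placeholders of mine, and I'm not certain every step applies to this situation or that the syntax is identical on Jewel:

    # take a copy of the journal before touching anything
    cephfs-journal-tool journal export backup.bin

    # write whatever events are still readable back into the metadata pool
    cephfs-journal-tool event recover_dentries summary

    # discard the damaged journal and start a fresh one
    cephfs-journal-tool journal reset

    # tell the monitors the rank is repaired so an MDS can pick it up again
    ceph mds repaired 0

Is that the right general shape, or is some of it unnecessary (or dangerous) in this case?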