It doesn't really help to create multiple threads for the same issue.
I don't see in your log output from [1] why the MDS went read-only.
Could you please add the startup log from the MDS in debug mode so we
can actually see why it goes into read-only?
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/SPIEG6YVZ2KSZUY7SLAN2VHIWMPFVI73/
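If it helps, this is roughly how I would raise the MDS verbosity and
capture the startup log; the namespace and deployment names below are
only examples for a Rook-based OpenShift setup, adjust them to yours:

# raise MDS debug verbosity (this is noisy, revert it afterwards)
ceph config set mds debug_mds 20
ceph config set mds debug_ms 1

# restart the MDS so the startup sequence is logged at the new level,
# then collect the log from the container output
oc -n openshift-storage rollout restart deploy/rook-ceph-mds-gml-okd-cephfs-a
oc -n openshift-storage logs deploy/rook-ceph-mds-gml-okd-cephfs-a > mds-startup.log

# reset the debug levels when done
ceph config rm mds debug_mds
ceph config rm mds debug_ms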
Quoting kreept.sama@xxxxxxxxx:
Hello. We are trying to resolve an issue with Ceph. Our OpenShift
cluster is blocked and we have tried almost everything.
The current state is:
MDS_ALL_DOWN: 1 filesystem is offline
MDS_DAMAGE: 1 mds daemon damaged
FS_DEGRADED: 1 filesystem is degraded
MON_DISK_LOW: mon be is low on available space
RECENT_CRASH: 1 daemons have recently crashed
We tried to perform:
cephfs-journal-tool --rank=gml-okd-cephfs:all event recover_dentries summary
cephfs-journal-tool --rank=gml-okd-cephfs:all journal reset
cephfs-table-tool gml-okd-cephfs:all reset session
ceph mds repaired 0
ceph config rm mds mds_verify_scatter
ceph config rm mds mds_debug_scatterstat
ceph tell gml-okd-cephfs scrub start / recursive repair force
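For reference, the scrub variant of the last command is normally
addressed to rank 0 with a comma-separated option list; the exact
syntax may differ between Ceph versions:
ceph tell mds.gml-okd-cephfs:0 scrub start / recursive,repair,force
ceph tell mds.gml-okd-cephfs:0 scrub status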
After these commands, the MDS comes up, but an error appears:
MDS_READ_ONLY: 1 MDSs are read only
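One thing that might narrow it down is to search the MDS log for the
moment it switched to read-only; the MDS usually does this after a
failed write to the metadata pool. The namespace and deployment name
below are only examples for a Rook deployment:
oc -n openshift-storage logs deploy/rook-ceph-mds-gml-okd-cephfs-a | grep -iE 'read-only|readonly|write error'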
We also tried to create a new fs with a new metadata pool, and to
delete and recreate the old fs with the same name using the old/new
metadata pool. We got rid of the errors, but the OpenShift cluster
did not want to work with the old persistent volumes. The pods
reported an error that they could not find the volume, even though it
was present and, moreover, was bound to its PVC.
Now we have rolled back the cluster and are trying to clear the MDS
error. Any ideas what to try?
Thanks
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx