That doesn't really help. The startup log should contain information
about why the MDS is going into read-only mode; here's an example from
the mailing list archive:
2020-07-30 18:14:44.835 7f646f33e700 -1 mds.0.159432 unhandled write error (90) Message too long, force readonly...
2020-07-30 18:14:44.835 7f646f33e700 1 mds.0.cache force file system read-only
2020-07-30 18:14:44.835 7f646f33e700 0 log_channel(cluster) log [WRN] : force file system read-only
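
The interesting part is whatever comes right before the "force file
system read-only" line. One way to look for it (a minimal sketch
assuming a packaged deployment with default log paths; with Rook on
OKD the namespace and pod name below are hypothetical and you would
read the MDS pod log instead):

grep -E "unhandled write error|force file system read-only" /var/log/ceph/ceph-mds.*.log
# Rook/OKD (hypothetical namespace/pod name):
# oc logs -n openshift-storage rook-ceph-mds-gml-okd-cephfs-a-<suffix> | grep -i readonly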
If necessary, turn on debug logs to provide more details.
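One way to do that (a minimal sketch assuming a release with the
centralized config database, i.e. Mimic or newer):

ceph config set mds debug_mds 20
ceph config set mds debug_ms 1
# reproduce the MDS startup, then drop the overrides again:
ceph config rm mds debug_mds
ceph config rm mds debug_ms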
Quoting kreept.sama@xxxxxxxxx:
Hello Eugen, yes I have.
It's from object a:
...
debug 2023-02-12T07:12:55.469+0000 7f66af51e700 1 mds.gml-okd-cephfs-a asok_command: status {prefix=status} (starting...)
debug 2023-02-12T07:13:05.453+0000 7f66af51e700 1 mds.gml-okd-cephfs-a asok_command: status {prefix=status} (starting...)
debug 2023-02-12T07:13:15.478+0000 7f66af51e700 1 mds.gml-okd-cephfs-a asok_command: status {prefix=status} (starting...)
debug 2023-02-12T07:13:25.477+0000 7f66af51e700 1 mds.gml-okd-cephfs-a asok_command: status {prefix=status} (starting...)
debug 2023-02-12T07:13:35.445+0000 7f66af51e700 1 mds.gml-okd-cephfs-a asok_command: status {prefix=status} (starting...)
debug 2023-02-12T07:13:45.487+0000 7f66af51e700 1 mds.gml-okd-cephfs-a asok_command: status {prefix=status} (starting...)
...
and the same for object b:
...
debug 2023-02-12T07:15:41.496+0000 7fdf6281e700 1 mds.gml-okd-cephfs-b asok_command: status {prefix=status} (starting...)
debug 2023-02-12T07:15:51.479+0000 7fdf6281e700 1 mds.gml-okd-cephfs-b asok_command: status {prefix=status} (starting...)
debug 2023-02-12T07:16:01.477+0000 7fdf6281e700 1 mds.gml-okd-cephfs-b asok_command: status {prefix=status} (starting...)
...
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx