Re: One mds daemon damaged, filesystem is offline. How to recover?

What does the MDS report in its logs from when it went down?
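If you still have them, a minimal way to pull the relevant entries, assuming a default package install and that the failed daemon was mds.a (adjust the daemon name and host to your setup), would be:

# on the host that was running the failed MDS
journalctl -u ceph-mds@a | tail -n 200
# or, with file-based logging:
grep -iE 'damaged|error|respawn' /var/log/ceph/ceph-mds.a.log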

What size do you get when you run

rados -p cephfs_metadata stat 200.00006048

There's a similar report [3] suggesting that you force an update on the object info; you could give that a shot (a consolidated sketch of the steps follows the list):

1. rados -p [cephfs_metadata] setomapval 200.00006048 temporary-key anything
2. ceph pg deep-scrub 2.44
3. Wait for the scrub to finish
4. rados -p [cephfs_metadata] rmomapkey 200.00006048 temporary-key
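A minimal sketch of those steps as a shell session, assuming the metadata pool is literally named cephfs_metadata and that 2.44 is the PG holding the object (both taken from your earlier output); the key name temporary-key is arbitrary:

# write a throwaway omap key to force the object info to be rewritten
rados -p cephfs_metadata setomapval 200.00006048 temporary-key anything
# deep-scrub the PG that holds the object
ceph pg deep-scrub 2.44
# check until the deep scrub has finished, e.g. via
ceph pg 2.44 query | grep -i scrub
# once the scrub is done, remove the temporary key again
rados -p cephfs_metadata rmomapkey 200.00006048 temporary-key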

The above Ceph error message shows "fs cephfs mds.0 is damaged". My MDS are named a, b and c. Does mds.0 mean mds.a?

Here mds.0 means rank 0. I would assume that you only have one rank, correct (one filesystem with standby MDS daemons)?
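If you want to verify which of your named daemons currently holds (or should hold) rank 0, something like the following should show it; this is just a sketch, with the filesystem name cephfs taken from your error message:

ceph fs status cephfs    # lists ranks and which MDS (a, b or c) is assigned to each
ceph fs dump             # full FSMap, including damaged ranks and standby daemons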


[3] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/034580.html


Quoting Sagara Wijetunga <sagarawmw@xxxxxxxxx>:

Hi Eugen
Thanks for the reply.
Ceph version:

# ceph version
ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf) nautilus (stable)

Can you share

rados list-inconsistent-obj 2.44
# rados list-inconsistent-obj 2.44
{"epoch":6996,"inconsistents":[]}

ceph tell mds.<MDS> damage ls

# ceph tell mds.a damage ls
2021-05-22 13:23:34.135 80bf25c00  0 client.4344312 ms_handle_reset on v2:192.168.1.130:6810/3532878906
2021-05-22 13:23:34.146 80dcc2500  0 client.4344318 ms_handle_reset on v2:192.168.1.130:6810/3532878906
Error EINVAL: MDS not active

The pool size is 3, right?

Yes, pool size is 3.

The above Ceph error message shows "fs cephfs mds.0 is damaged". My MDS are named a, b and c. Does mds.0 mean mds.a?
Since I have two MDS that are not damaged, how do I recover and mount the CephFS?
Btw, our home directories are on CephFS; at the moment, no one can log in.
Sagara


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

