Hi,

I am using CephFS with an MDS, mounted through ceph-fuse. It worked well until yesterday, when I added some new OSDs and hosts to the cluster. Since then I can't use CephFS any more. When I check with "ceph -s" it shows:

    cluster e7545c1d-f452-4893-8ba2-29038fc8a767
     health HEALTH_WARN
            1 pgs down; 2 pgs incomplete; 2 pgs stuck inactive; 2 pgs stuck unclean;
            15 requests are blocked > 32 sec; mds cluster is degraded;
            clock skew detected on mon.c, mon.d, mon.e
     monmap e1: 5 mons at {a=30.10.0.6:6789/0,b=30.10.0.7:6789/0,c=30.10.0.8:6789/0,d=30.10.0.9:6789/0,e=30.10.0.10:6789/0},
            election epoch 294, quorum 0,1,2,3,4 a,b,c,d,e
     mdsmap e178: 1/1/1 up {0=a=up:rejoin}
     osdmap e10551: 34 osds: 34 up, 34 in
      pgmap v1748469: 17216 pgs, 7 pools, 340 GB data, 104 kobjects
            997 GB used, 99774 GB / 100772 GB avail
                   1 down+incomplete
               17214 active+clean
                   1 incomplete

And "ceph health detail" shows:

    mds cluster is degraded
    mds.a at 30.10.0.6:6807/29136 rank 0 is rejoining

Can you help me fix this problem, or suggest any way to get the data stored in CephFS back?

Regards,
Fengtiang, Wang
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com