Re: Cannot mount cephfs after some disaster recovery

On Tue, Mar 1, 2016 at 11:51 AM, 10000 <10000@xxxxxxxxxxxxx> wrote:
> Hi,
>     I've run into trouble mounting CephFS after performing some disaster
> recovery steps from the official
> documentation (http://docs.ceph.com/docs/master/cephfs/disaster-recovery).
>     Now when I try to mount the cephfs, I get "mount error 5 = Input/output
> error".
>     When I run "ceph -s" on the cluster, it prints this:
>      cluster 15935dde-1d19-486e-9e1c-67414f9927f6
>      health HEALTH_OK
>      monmap e1: 4 mons at
> {HK-IDC1-10-1-72-151=172.17.17.151:6789/0,HK-IDC1-10-1-72-152=172.17.17.152:6789/0,HK-IDC1-10-1-72-153=172.17.17.153:6789/0,HK-IDC1-10-1-72-160=10.1.72.160:6789/0}
>             election epoch 528, quorum 0,1,2,3
> HK-IDC1-10-1-72-160,HK-IDC1-10-1-72-151,HK-IDC1-10-1-72-152,HK-IDC1-10-1-72-153
>      mdsmap e21038: 1/1/0 up {0=HK-IDC1-10-1-72-160=up:active}
>      osdmap e10536: 108 osds: 108 up, 108 in
>             flags sortbitwise
>       pgmap v424957: 6564 pgs, 3 pools, 3863 GB data, 67643 kobjects
>             8726 GB used, 181 TB / 189 TB avail
>                 6560 active+clean
>                    3 active+clean+scrubbing+deep
>                    1 active+clean+scrubbing
>
>      It seems the mdsmap should say "1/1/1 up" instead of "1/1/0 up", and
> I really don't know what the last number means.
>      And the filesystem is still there if I run "ceph fs ls", which prints this:
>
>      name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data]
>
>      I have tried my best to Google this problem but found nothing. I still
> want to know whether I can bring the cephfs back. Does anyone have ideas?
>
>      Oh, I did the disaster recovery because at first I got "mdsmap e21012:
> 0/1/1 up, 1 up:standby, 1 damaged". To bring the fs back to work, I ran the
> "JOURNAL TRUNCATION", "MDS TABLE WIPES", and "MDS MAP RESET" steps. However,
> I think most of the files' metadata must still be saved on the OSDs (in the
> metadata pool, in RADOS). I just want to get it back.
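For context, those steps correspond to roughly this command sequence from
the disaster-recovery page you linked (a sketch, not a replacement for the
doc; "cephfs" is the fs name from your "ceph fs ls" output, and the backup
filename is just an example):

    # back up the journal first, so nothing is lost for good
    cephfs-journal-tool journal export backup.bin

    # JOURNAL TRUNCATION: discard the damaged journal
    cephfs-journal-tool journal reset

    # MDS TABLE WIPES: reset the session, snap and inode tables
    cephfs-table-tool all reset session
    cephfs-table-tool all reset snap
    cephfs-table-tool all reset inode

    # MDS MAP RESET: rebuild the fs map on top of the surviving pools
    ceph fs reset cephfs --yes-i-really-mean-it

Your file data and the backing metadata objects should survive all of
this; only the journal and the wiped tables are discarded.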

The third number in the mdsmap line is max_mds; the reset appears to have
left it at 0, which is what your "1/1/0 up" is showing and is likely what
breaks the mount. Try running the command "ceph mds set max_mds 1".
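If that works, "ceph -s" (or "ceph mds stat") should go back to showing
"1/1/1 up" with the MDS active, and the mount should succeed again. A
kernel-client mount would look something like this (monitor address taken
from your monmap; the mount point and credentials are placeholders for
your own setup):

    # check the mdsmap now reports 1/1/1 up
    ceph mds stat

    # retry the mount against one of your monitors
    mount -t ceph 172.17.17.151:6789:/ /mnt/cephfs -o name=admin,secret=<key>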

>
>       Thanks.
>
> Yingdi Guo
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


