Re: Cannot mount cephfs after some disaster recovery

On 01/03/2016 18:14, John Spray wrote:

>> And what is the meaning of the first and the second number below?
>>
>>     mdsmap e21038: 1/1/0 up {0=HK-IDC1-10-1-72-160=up:active}
>>                    ^ ^
> 
> Your whitespace got lost here I think, but I guess you're talking
> about the 1/1 part.

Yes indeed.

> The shorthand MDS status is up/in/max_mds
> (https://github.com/ceph/ceph/blob/master/src/mds/MDSMap.cc#L248)
> 
> up: how many daemons are up and holding a rank (they may be active or
> replaying, etc)
> in: how many ranks exist in the MDS cluster
> max_mds: if this many ranks already exist, new daemons will be made
> standbys instead of having new ranks created for them.
> 
> On single-active-daemon systems, this is really just going to be 1/1/1
> or 0/1/1, depending on whether you have an up MDS or not.

Ok, thanks John for the explanations.
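
For the archives, here is how I now read the status line, applying the
up/in/max_mds order from John's explanation above to the counts in my
original output:

    mdsmap e21038: 1/1/0 up {0=HK-IDC1-10-1-72-160=up:active}
                   ^ ^ ^
                   | | +-- max_mds = 0 (no new ranks will be created)
                   | +---- in      = 1 (one rank exists in the cluster)
                   +------ up      = 1 (one daemon is up, holding rank 0)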


-- 
François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com