Re: Cannot mount cephfs after some disaster recovery

On Tue, Mar 1, 2016 at 11:41 AM, Francois Lafont <flafdivers@xxxxxxx> wrote:
> Hi,
>
> On 01/03/2016 10:32, John Spray wrote:
>
>> As Zheng has said, that last number is the "max_mds" setting.
>
> And what is the meaning of the first and second numbers below?
>
>     mdsmap e21038: 1/1/0 up {0=HK-IDC1-10-1-72-160=up:active}
>                    ^ ^

Your whitespace got lost here, I think, but I guess you're talking
about the 1/1 part.

The shorthand MDS status is up/in/max_mds
(https://github.com/ceph/ceph/blob/master/src/mds/MDSMap.cc#L248)

up: how many daemons are up and holding a rank (they may be active,
replaying, etc.)
in: how many ranks exist in the MDS cluster
max_mds: once this many ranks exist, new daemons are made standbys
instead of having ranks created for them.

On single-active-daemon systems, this is really just going to be 1/1/1
or 0/1/1, depending on whether you have an up MDS or not.
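
For example, here's a minimal Python sketch of how you could pull the
three counters out of that summary line (parse_mds_counts is just a
name I made up, and it only handles the exact shape of your example
line, not every possible mdsmap summary):

    import re

    # Pull the up/in/max_mds triple out of an mdsmap summary line,
    # e.g. "mdsmap e21038: 1/1/0 up {0=HK-IDC1-10-1-72-160=up:active}"
    def parse_mds_counts(summary):
        m = re.search(r'(\d+)/(\d+)/(\d+) up', summary)
        if m is None:
            raise ValueError("no up/in/max_mds triple found")
        up, num_in, max_mds = (int(x) for x in m.groups())
        return up, num_in, max_mds

    line = "mdsmap e21038: 1/1/0 up {0=HK-IDC1-10-1-72-160=up:active}"
    print(parse_mds_counts(line))  # -> (1, 1, 0)

So your 1/1/0 reads as: one daemon up and holding a rank, one rank in
the cluster, and max_mds currently set to 0.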

John

>
> --
> François Lafont
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



