Re: mds server(s) crashed

I have also run into a problem: the standby MDS does not become active when the active MDS service stops, which has bothered me for several days. Maybe a multi-MDS cluster could solve these problems, but the Ceph team hasn't released that feature yet.
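
For reference, this is how I check whether the standby daemon has actually registered with the monitors, plus a ceph.conf sketch for an explicit standby (the daemon names mds.a and mds.b are just examples from my setup):

    # one daemon should show up:active, the other up:standby
    ceph mds stat
    ceph -s

    # ceph.conf sketch: make mds.b follow rank 0 as a standby-replay daemon
    [mds.b]
        mds standby for rank = 0
        mds standby replay = true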


yangyongpeng@xxxxxxxxxxxxx
 
From: Yan, Zheng
Date: 2015-08-13 10:21
To: Bob Ababurko
CC: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] mds server(s) crashed
On Thu, Aug 13, 2015 at 7:05 AM, Bob Ababurko <bob@xxxxxxxxxxxx> wrote:
>
> If I am using a more recent client (kernel OR ceph-fuse), should I still be
> worried about the MDSes crashing?  I have added RAM to my MDS hosts and it's
> my understanding this will also help mitigate any issues, in addition to
> setting mds_bal_frag = true.  Not having used cephfs before, do I always
> need to worry about my MDS servers crashing all the time, thus the need for
> setting mds_reconnect_timeout to 0?  This is not ideal for us, nor is the
> idea of clients not being able to access their mounts after an MDS recovery.
>
 
It's unlikely this issue will happen again, but I can't guarantee there
won't be other issues.
 
There is no need to set mds_reconnect_timeout to 0.
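
For reference, both settings mentioned in this thread live in the [mds] section of ceph.conf; the snippet below is only a sketch of the setup implied here (mds_reconnect_timeout is shown at its default, which I believe is 45 seconds):

    [mds]
        # allow large directories to be fragmented across the MDS
        mds bal frag = true
        # reconnect window after an MDS restart; leave at the default, do not set to 0
        mds reconnect timeout = 45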
 
 
> I am actually looking for the most stable way to implement cephfs at this
> point.  My cephfs cluster contains millions of small files, so there are many
> inodes, if that needs to be taken into account.  Perhaps I should only be
> using one MDS node for stability at this point?  Is this the best way forward
> to get a handle on stability?  I'm also curious whether I should set my mds
> cache size to a number greater than the number of files I have in the cephfs
> cluster.  If you can give some key points on configuring cephfs for the best
> stability and, if possible, availability, that would be helpful to me.
 
One active MDS is the most stable setup. Adding a few standby MDS
daemons should not hurt stability.
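
If it helps, a minimal ceph.conf sketch for that layout: run several ceph-mds daemons, and any daemon beyond the single active one simply sits as a standby (the host names here are placeholders):

    [mds.a]
        host = mds-host-1
    [mds.b]
        host = mds-host-2
    [mds.c]
        host = mds-host-3
    # with the default of one active MDS, mds.b and mds.c stay in standby
    # and one of them takes over if mds.a fails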
 
You can't set mds cache size to a number greater than the number of
files in the fs; a cache that large would require lots of memory.
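
For completeness, the cache setting being discussed is mds cache size in ceph.conf; a rough sketch (the value shown is the long-standing default of 100000 inodes, and each cached inode costs memory, so scaling it up toward millions of files needs a correspondingly large amount of RAM on the MDS host):

    [mds]
        # number of inodes the MDS keeps in cache; default is 100000
        mds cache size = 100000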
 
 
Yan, Zheng
 
>
> thanks again for the help.
>
> thanks,
> Bob
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
