On 1/20/20 4:17 PM, Anton Aleksandrov wrote:
> Hello community,
>
> We have a very small Ceph cluster of just 12 OSDs (1 per small server),
> 3 MDS daemons (one active) and 1 CephFS client.

Which version of Ceph?

$ ceph versions

> The CephFS client is running CentOS 7, kernel 3.10.0-957.27.2.el7.x86_64.
>
> We created 3 MDS servers for redundancy and we mount our filesystem by
> connecting to all three of them. But we have noticed that if we power
> off the first one listed in the mount options, then everything freezes
> on the client, and in most cases we have to bring the MDS back and do a
> force-reset on the client.
>
> What are we doing wrong? Shouldn't the client automatically switch to
> another MDS server within microseconds and keep running? And why does
> it affect only the first one? We tried powering off the second or third
> MDS host and that had no effect on the client side.

It should work, but can you explain how long you waited? What does
'ceph -s' show you after the MDS failed? Does another MDS take over the
work and become the active MDS?

Wido

> Should we have a local MDS on the client and just connect to it? Or
> should there be some other logic or setting? To be honest, we used the
> ceph-deploy tool and haven't done much configuration.
>
> Thank you for your understanding and help. :)
>
> Anton Aleksandrov.
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
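
One possible source of confusion worth checking (an assumption on my part, not confirmed by the thread): with the kernel CephFS client, the addresses in the mount source are *monitor* addresses, not MDS addresses. MDS failover is driven by the monitors promoting a standby to active; the client then follows the updated MDS map. A sketch, with placeholder hostnames mon1-mon3 and a placeholder mount point:

```shell
# Mount through all three monitors (hostnames are hypothetical).
# This list is for monitor redundancy; MDS failover happens via the
# monitors, regardless of the order of addresses given here.
mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret

# After powering off the active MDS, watch a standby take over:
ceph -s            # overall cluster health, including MDS summary
ceph fs status     # per-filesystem MDS ranks and their states
```

Also note that failover is not instantaneous: the monitors wait out the mds_beacon_grace period (on the order of 15 seconds by default) before marking an unresponsive MDS as failed and promoting a standby, so a short freeze after killing the active MDS is expected.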