Yes, another MDS takes over, and the old one even comes back, but the
client does not always "unfreeze".
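Next time it stays frozen, here is what I plan to check on the client
(a sketch; these are the standard kernel-client debugfs files, assuming
debugfs is mounted):

$ cat /sys/kernel/debug/ceph/*/mdsc    # in-flight MDS requests the client is stuck on
$ cat /sys/kernel/debug/ceph/*/mdsmap  # the MDS map the client currently sees
$ dmesg | tail -n 50                   # ceph reconnect / "socket closed" messages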
Weird, I see some different versions:
ceph versions
{
    "mon": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 2,
        "ceph version 13.2.8 (5579a94fafbc1f9cc913a0f5d362953a5d9c3ae0) mimic (stable)": 1
    },
    "mgr": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 2,
        "ceph version 13.2.8 (5579a94fafbc1f9cc913a0f5d362953a5d9c3ae0) mimic (stable)": 1
    },
    "osd": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 24
    },
    "mds": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 1
    },
    "overall": {
        "ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable)": 29,
        "ceph version 13.2.8 (5579a94fafbc1f9cc913a0f5d362953a5d9c3ae0) mimic (stable)": 2
    }
}
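If it matters, I can first bring everything to 13.2.8 -- roughly like
this on each affected node, one node at a time (a sketch for CentOS 7
with the stock systemd targets):

$ yum update -y ceph
$ systemctl restart ceph-mon.target   # on monitor nodes
$ systemctl restart ceph-mgr.target   # on manager nodes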
Anton.
On 1/20/2020 5:20 PM, Wido den Hollander wrote:
On 1/20/20 4:17 PM, Anton Aleksandrov wrote:
Hello community,
We have a very small Ceph cluster of just 12 OSDs (one per small server),
3 MDS daemons (one active) and 1 CephFS client.
Which version of Ceph?
$ ceph versions
The CephFS client is running CentOS 7, kernel 3.10.0-957.27.2.el7.x86_64.
We created 3 MDS servers for redundancy, and we mount our filesystem by
connecting to all 3 of them. What we have noticed is that if we power
off the first one listed in the mount option, everything freezes on the
client, and in most cases we have to bring the MDS back and do a force
reset on the client.
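For reference, a kernel CephFS mount with three addresses looks roughly
like ours (hostnames and the secret file below are placeholders; the
addresses in the mount string are monitor addresses):

$ mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret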
What are we doing wrong? Shouldn't the client automatically switch to
another MDS server in a microsecond and keep running? And why does it
affect only the first one? We tried powering off the second or third MDS
host, and that had no effect on the client side.
It should work, but can you explain how long you waited?
What does 'ceph -s' show you after the MDS failed? Does another MDS take
over the work and become the active MDS?
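For example (all three commands exist in Mimic):

$ ceph -s          # overall cluster health, including a degraded filesystem
$ ceph mds stat    # compact view of which MDS is active and which are standby
$ ceph fs status   # per-rank MDS state for the filesystem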
Wido
Should we have a local MDS on the client and just connect to it? Or
should there be some other logic or setting? To be honest, we used the
ceph-deploy tool and haven't done much configuration.
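For example, would per-daemon standby-replay settings in ceph.conf speed
up failover? A sketch of what I mean, with a hypothetical daemon name
(these are the Mimic-era options):

[mds.mds-b]
mds_standby_replay = true    # follow the active MDS's journal to take over faster
mds_standby_for_rank = 0     # stand by for rank 0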
Thank you for your understanding and help. :)
Anton Aleksandrov.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com