Re: Is it possible to suggest the active MDS to move to a datacenter ?

If your cluster needs both datacenters to operate, then I wouldn't really worry about where your active MDS is running.  OTOH, if you're set on having the active MDS in one DC or the other, you could use some external scripting to check whether the active MDS is in DC #2 while an MDS in DC #1 is in standby, and if so issue `ceph mds fail $mds` for each MDS daemon in DC #2.  Once the DC #2 daemons have been failed and rejoin the cluster as standbys, an MDS in DC #1 would likely become the active MDS.
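
Something along these lines could do that check.  It's only a sketch: the daemon names mds-dc2-a and mds-dc2-b are placeholders for your own MDS names, it assumes jq is installed, and the JSON field names from `ceph fs status --format json` can differ between releases, so verify against your version first.

    #!/bin/sh
    # Sketch: fail whichever DC #2 MDS daemon is currently active so that
    # a DC #1 standby takes over.  Daemon names below are placeholders.
    DC2_MDS="mds-dc2-a mds-dc2-b"

    # Ask the cluster which daemon currently holds the active state.
    # (JSON field names may differ slightly between releases.)
    active=$(ceph fs status --format json |
             jq -r '.mdsmap[] | select(.state == "active") | .name')

    for mds in $DC2_MDS; do
        if [ "$active" = "$mds" ]; then
            # Mark the daemon failed; a standby (ideally in DC #1) takes
            # over, and this daemon rejoins as a standby when it reconnects.
            ceph mds fail "$mds"
        fi
    done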

All of that said, I still don't think it will help much.  You can of course test whether you see any difference in operations depending on where the active MDS server is running.
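
A rough way to compare: note which daemon is active, then time a metadata-heavy operation from a client, and repeat with the active MDS in the other DC.  The mount point /mnt/cephfs, the mds.<name> id and the mds_server counter section below are assumptions to adapt to your setup.

    # From a client in the public DC: which MDS is active, then a
    # metadata-heavy workload to time.
    ceph fs status
    time find /mnt/cephfs -type f > /dev/null

    # On the active MDS host, request/latency counters are visible through
    # the admin socket (counter names vary by release).
    ceph daemon mds.<name> perf dump | jq .mds_server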

On Fri, Mar 30, 2018 at 1:58 AM Nicolas Huillard <nhuillard@xxxxxxxxxxx> wrote:
Thanks for your answer.

On Thursday, 29 March 2018 at 13:51 -0700, Patrick Donnelly wrote:
> On Thu, Mar 29, 2018 at 1:02 PM, Nicolas Huillard <nhuillard@dolomede.fr> wrote:
> > I manage my 2 datacenters with Pacemaker and Booth. One of them is
> > the publicly-known one, thanks to Booth.
> > Whatever the "public datacenter", Ceph is a single storage cluster.
> > Since most of the cephfs traffic comes from this "public datacenter",
> > I'd like to suggest or force the active MDS to move to the same
> > datacenter, hoping to reduce traffic on the inter-datacenter link,
> > and reduce cephfs metadata operations latency.
> >
> > Is it possible to forcefully move the active MDS using external
> > triggers?
>
> No and it probably wouldn't be beneficial. The MDS still needs to talk
> to the metadata/data pools and increasing the latency between the MDS
> and the OSDs will probably do more harm.

It wasn't clear in my first post: the OSDs are already split between both
DCs, so having the MDS on either side has the same effect on MDS-OSD
traffic. It appears that my current usage profile generates load on the
MDS, but not that much on the metadata-pool OSDs.
The public DC is just the one of the two that Booth gives its ticket to.

> One possibility for helping your situation is to put NFS-Ganesha in
> the public datacenter as a gateway to CephFS. This may help with your
> performance by (a) sharing a larger cache among multiple clients and
> (b) reducing capability conflicts between clients, thereby resulting
> in less metadata traffic with the MDS. Be aware an HA solution doesn't
> yet exist for NFS-Ganesha+CephFS outside of OpenStack Queens
> deployments.

I'll keep it stupid-simple then, just use the cephfs client, and
monitor the usage profile of things ;-)

--
Nicolas Huillard
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
