(yet another) multi active MDS advice needed

Hi,

We're migrating from a Jewel/filestore-based CephFS architecture to a Luminous/bluestore-based one.

One MUST HAVE is multiple Active MDS daemons. I'm still lacking knowledge of how they actually work.
After reading the docs and the ML, my understanding is that they divide responsibilities by partitioning the filesystem namespace, each rank becoming authoritative for its own directory subtrees (please correct me if I'm wrong).
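
If I got that right, I suppose I could check how the tree is split between ranks via the MDS admin socket, something like the below (mds.a is just a placeholder for the daemon name):

    # shows which directory subtrees this MDS rank is authoritative for
    ceph daemon mds.a get subtrees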

Question 1: I'd like to know if it is viable to have 4 MDS daemons, 3 Active and 1 Standby (or Standby-Replay, if that's still possible with multi-MDS).
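
Just so you know what I have in mind, I'm assuming it would be set up roughly like this (the filesystem name and daemon section are only examples, and I'm assuming the Luminous per-daemon standby options are the right ones):

    # raise the number of active ranks on the filesystem to 3
    ceph fs set cephfs max_mds 3

    # ceph.conf section for the 4th daemon, to make it a standby-replay
    # follower of rank 0 (Luminous-era per-daemon options, as I understand them)
    [mds.standby01]
        mds standby replay = true
        mds standby for rank = 0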

Basically, what we have is 2 subtrees used by Dovecot: INDEX and MAIL.
Their trees are almost identical, but INDEX stores all the Dovecot metadata (heavy IO), while MAIL stores the actual email files, with many more writes than reads.

I don't yet know which one will put more load on the MDS servers, so I wonder if I can collect metrics on MDS usage per pool/subtree once it's deployed.
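I'm assuming the MDS perf counters are the place to start for that, via the admin socket (again, mds.a is just a placeholder), even if they aren't broken down per pool:

    # live view of the MDS performance counters
    ceph daemonperf mds.a

    # full counter dump (requests, caps, cache, journal, etc.)
    ceph daemon mds.a perf dump
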
Question 2: If the metadata workloads are very different, I wonder if I can isolate them, e.g. by pinning each of these directories to a specific MDS rank.
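
From what I read in the Luminous docs, pinning would look roughly like this on a mounted client (the mount point and rank numbers are just examples for our layout):

    # pin the INDEX tree to rank 0 and the MAIL tree to rank 1;
    # anything not pinned stays with the default balancer
    setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/index
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/mail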

Cache tiering is deprecated, so:
Question 3: how should I think about a read cache mechanism in Luminous with bluestore, mainly to keep newly created files hot (emails that just arrived and will probably be fetched by the user a few seconds later via IMAP/POP3)?
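
The only knob I've found so far is the per-OSD BlueStore cache, which is not quite the same thing; I'm assuming these are still the right option names in Luminous, and the sizes are just examples:

    # ceph.conf - per-OSD BlueStore cache sizing (used when
    # bluestore_cache_size is left at its default of 0)
    [osd]
        bluestore cache size ssd = 3221225472   # 3 GiB on SSD OSDs
        bluestore cache size hdd = 1073741824   # 1 GiB on HDD OSDs

Is there anything better suited to keeping just-written files hot for the first few reads?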

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ
