On Thu, May 10, 2018 at 7:38 PM, João Paulo Sacchetto Ribeiro Bastos
<joaopaulosr95@xxxxxxxxx> wrote:
> Hello guys,
>
> My company is about to rebuild its whole infrastructure, so I was called
> in to help with the planning. We are essentially a corporate mail
> provider, handling lots of clients daily with Dovecot and Roundcube, and
> we want to design a better layout for our cluster. Today, on Jewel, we
> have a single CephFS holding both Dovecot's index and mail data, but we
> want to split it into an index_FS and a mail_FS to spread the workload a
> little better. Is that worthwhile nowadays? From my research I understand
> that we will need separate data and metadata pools for each FS, as well
> as a group of MDS daemons for each of them.
>
> The one thing that really scares me about all of this: we are planning to
> have four machines at full disposal for our MDS instances. We started to
> wonder whether an idea like the one below is valid; can anybody give a
> hint on this? We basically want to run two MDS instances on each machine
> (one for each FS) and wonder whether we'll be able to have them swap
> between active and standby simultaneously without any trouble.
>
> index_FS: (active={machines 1 and 3}, standby={machines 2 and 4})
> mail_FS:  (active={machines 2 and 4}, standby={machines 1 and 3})

Nothing wrong with that setup, but remember that those servers will have to
be well-resourced enough to run all four daemons at once (when a failure
occurs), so it may not matter very much exactly which servers are running
which daemons.

Among a filesystem's MDS daemons (i.e. daemons with the same
standby_for_fscid setting), Ceph will activate whichever one comes up
first, so if it's important to you to have particular daemons active, you
will need to take care of that at the point you start them up.

John

> Regards,
> --
> João Paulo Sacchetto Ribeiro Bastos
> +55 31 99279-7092
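
For anyone sketching this out on a Luminous-era cluster, the two-filesystem
layout discussed above might look roughly like the following. The pool
names, PG counts, daemon names and fscid values are illustrative
placeholders rather than anything from the thread, and the
standby_for_fscid pinning is just one way of tying daemons to a filesystem:

  # Allow a second filesystem in the cluster (needed before the second "fs new")
  ceph fs flag set enable_multiple true --yes-i-really-mean-it

  # One metadata pool and one data pool per filesystem (PG counts are examples)
  ceph osd pool create index_metadata 64
  ceph osd pool create index_data 128
  ceph osd pool create mail_metadata 64
  ceph osd pool create mail_data 256

  ceph fs new index_FS index_metadata index_data
  ceph fs new mail_FS  mail_metadata  mail_data

  ceph fs dump        # note each filesystem's numeric id (fscid)

  # ceph.conf on machine 1 (machines 2-4 analogous), pinning each local MDS
  # to one filesystem; daemon names and fscid values here are made up
  [mds.machine1-index]
      mds_standby_for_fscid = 1    # index_FS
  [mds.machine1-mail]
      mds_standby_for_fscid = 2    # mail_FS

As John notes, the fscid pinning only ties each daemon to one filesystem;
which daemon of a pair becomes active is still decided by whichever
registers first, so the daemons you prefer to be active would have to be
started before their standbys.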