Re: (yet another) multi active mds advise needed

Hi Patrick

On Fri, May 18, 2018 at 6:20 PM Patrick Donnelly <pdonnell@xxxxxxxxxx> wrote:
Each MDS may have multiple subtrees they are authoritative for. Each
MDS may also replicate metadata from another MDS as a form of load
balancing.

Ok, it's good to know that it actually does some load balancing. Thanks.
New question: will it make any difference to the balancing if, instead of keeping the MAIL directory in the root of CephFS with the domains' subtrees inside it,
I drop the parent directory and put all the subtrees directly in the CephFS root?
 
standby-replay daemons are not available to take over for ranks other
than the one it follows. So, you would want to have a standby-replay
daemon for each rank or just have normal standbys. It will likely
depend on the size of your MDS (cache size) and available hardware.
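For reference, a standby-replay daemon following a specific rank can be configured in ceph.conf roughly as follows (a sketch using Luminous-era option names; the daemon names, rank numbers, and fscid below are illustrative, not from this thread):

```ini
; Illustrative ceph.conf fragment (pre-Nautilus option names).
; One standby-replay daemon per active rank of the filesystem.
[mds.standby-a]
mds_standby_replay = true
mds_standby_for_rank = 0

[mds.standby-b]
mds_standby_replay = true
mds_standby_for_rank = 1
```

Without mds_standby_for_rank, a standby-replay daemon follows whichever rank it is assigned to, and plain standbys (no mds_standby_replay) can take over any failed rank.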

It's best if you see whether the normal balancer (especially in v12.2.6
[1]) can handle the load for you without trying to micromanage things
via pins. You can use pinning to isolate metadata load from other
ranks as a stop-gap measure.
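As a sketch, pinning a subtree to a rank is done by setting an extended attribute on the directory (the mount point, directory name, and rank number here are hypothetical examples, not paths from this thread; this requires a live CephFS mount):

```shell
# Pin the subtree /mnt/cephfs/mail/example.com to MDS rank 1.
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/mail/example.com

# A value of -1 removes the pin and lets the balancer decide again.
setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/mail/example.com
```

Child directories inherit the pin unless they set their own, so pinning each top-level domain directory is enough to partition the metadata load.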

Ok, I will start with the simplest approach. It can be changed after deployment if that turns out to be necessary.

On Fri, May 18, 2018 at 6:38 PM Daniel Baumann <daniel.baumann@xxxxxx> wrote:
jftr, having 3 active mds and 3 standby-replay resulted in May 2017 in a
longer downtime for us due to http://tracker.ceph.com/issues/21749

we're not using standby-replay MDSs anymore but only "normal" standbys,
and haven't had any problems since (we were running kraken then, upgraded
to luminous last fall).

Thank you very much for your feedback, Daniel. I'll go with the regular standby daemons, then.

Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
Belo Horizonte - Brasil
IRC NICK - WebertRLZ


 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
