Re: multiple active MDS servers is OK for production Ceph clusters OR Not

We have had excellent results with multi-MDS - *after* we pinned every
directory. Directory migrations caused so much load that multi-MDS was
frequently no faster than a single MDS. This was on Nautilus at the
time. The hard limit on strays is also per-MDS, so we ended up
splitting to more MDSes to buy some time there. From what I can tell,
snapshots don't get moved if you change a pin, at least not
immediately, so keep that in mind.
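
For anyone trying the same thing, here's a minimal sketch of explicit
pinning; the mount point, directory, rank number, and MDS name below
are made-up placeholders:

    # Pin a subtree to MDS rank 1; descendants inherit the pin
    setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/projects

    # Verify the pin
    getfattr -n ceph.dir.pin /mnt/cephfs/projects

    # Watch the per-MDS stray count (num_strays in the mds_cache
    # section); run this on the node hosting that daemon
    ceph daemon mds.a perf dump mds_cache | grep num_strays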

On Wed, Nov 17, 2021 at 8:12 AM Eugen Block <eblock@xxxxxx> wrote:
>
> Hi,
>
> in this thread [1] Dan gives very helpful points to consider
> regarding multi-active MDS. Are you sure you need that?
> One of our customers has tested such a setup extensively with
> directory pinning because the MDS balancer couldn't handle the high
> client load. To better utilize the MDS servers (the MDS is largely
> single-threaded), they ran multiple daemons per server, which also
> worked quite well. The only issue is rolling upgrades, where you
> need to reduce max_mds to 1, which doesn't work in that setup. But
> this is all still on Nautilus.
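> A minimal sketch of that pre-upgrade step (assuming the filesystem
> is named "cephfs"; substitute your own name):
>
>     ceph fs set cephfs max_mds 1
>     ceph fs status   # wait until only one active rank remains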
>
> [1]
> https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/message/4LNXN6U5DTB2BFPBDGDUKTEB4THD7HCH/
>
>
> Zitat von huxiaoyu@xxxxxxxxxxxx:
>
> > Dear Cephers,
> >
> > On reading a technical blog from Croit:
> > https://croit.io/blog/ceph-performance-test-and-optimization
> >
> > It says the following: "It should be noted that it is still debated
> > whether a configuration with multiple active MDS servers is OK for
> > production Ceph clusters."
> >
> > Just wondering: are multiple active MDS servers OK for production
> > Ceph clusters or not? Does anyone have practical lessons learned?
> > Could someone share more stories with us, whether successes or
> > failures?
> >
> > Thanks a lot in advance,
> >
> > samuel
> >
> > huxiaoyu@xxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


