Re: how to set load balance on multi active mds?

You might want to read up on https://docs.ceph.com/en/pacific/cephfs/multimds/
The page contains information on directory pinning and the related pinning policies.
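
As a minimal sketch of the ephemeral pinning policies described on that page
(the mount point /mnt/cephfs and the directory names are only placeholders):

  # distribute the immediate children of a large parent directory
  # across all active MDS ranks (ephemeral distributed pinning)
  setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home

  # ephemerally pin about 1% of newly created subdirectories
  # to a random MDS rank (ephemeral random pinning)
  setfattr -n ceph.dir.pin.random -v 0.01 /mnt/cephfs/tmp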

On Thu, Aug 10, 2023 at 12:11 PM Eugen Block <eblock@xxxxxx> wrote:
>
> Okay, you didn't mention that in your initial question. There was an
> interesting talk [3] at the Cephalocon in Amsterdam about an approach
> to combining dynamic and static pinning, but I don't know what its
> current status is. Regarding tuning options for the existing balancer,
> I would hope that Gregory or Patrick can chime in here.
>
> [3] https://www.youtube.com/watch?v=pDURll6Y-Ug
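>
> A few knobs exist for the existing balancer; purely as a sketch, the
> values below are the defaults, so please test any changes before
> applying them in production:
>
>   # how often the balancer runs, in seconds
>   ceph config set mds mds_bal_interval 10
>   # minimum subtree "temperature" before a subtree gets migrated
>   ceph config set mds mds_bal_min_rebalance 0.1
>   # load metric: 0 = hybrid, 1 = request rate and latency, 2 = CPU load
>   ceph config set mds mds_bal_mode 0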
>
> Quoting zxcs <zhuxiongcs@xxxxxxx>:
>
> > Thanks a lot, Eugen!
> >
> > We are using dynamic subtree pinning. We have another cluster that
> > uses manual pinning, but we have many directories and would need to
> > pin each dir for each request, so in our new cluster we want to try
> > dynamic subtree pinning. We don't want a human to kick in every
> > time, because sometimes directory A is hot and sometimes directory B
> > is hot, and each directory has many subdirectories and
> > sub-subdirectories.
> >
> > But we found that the load is not balanced across all MDSs when we
> > use dynamic subtree pinning, so we want to know whether there is any
> > config we can tune for dynamic subtree pinning. Thanks again!
> >
> > Thanks,
> > xz
> >
> >> On Aug 9, 2023, at 17:40, Eugen Block <eblock@xxxxxx> wrote:
> >>
> >> Hi,
> >>
> >> you could benefit from directory pinning [1] or dynamic subtree
> >> pinning [2]. We had great results with manual pinning in an older
> >> Nautilus cluster, but we haven't had a chance to test dynamic
> >> subtree pinning yet. It's difficult to tell in advance which option
> >> would best suit your use case, so you'll probably have to try.
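> >>
> >> For reference, manual pinning [1] is just an extended attribute on
> >> the directory; something like the following, where the path and
> >> rank are only examples:
> >>
> >>   # pin the whole subtree under projectA to MDS rank 0
> >>   setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/volumes/projectA
> >>   # setting the value to -1 removes the pin again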
> >>
> >> Regards,
> >> Eugen
> >>
> >> [1]
> >> https://docs.ceph.com/en/reef/cephfs/multimds/#manually-pinning-directory-trees-to-a-particular-rank
> >> [2]
> >> https://docs.ceph.com/en/reef/cephfs/multimds/#dynamic-subtree-partitioning-with-balancer-on-specific-ranks
> >>
> >> Quoting zxcs <zhuxiongcs@xxxxxxx>:
> >>
> >>> Hi, experts,
> >>>
> >>> We have a production environment built with Ceph version 16.2.11
> >>> (Pacific) and are using CephFS.
> >>> We have also enabled multiple active MDSs (more than 10), but we
> >>> usually see the client request load unevenly distributed across
> >>> these MDSs.
> >>> See the picture below: the busiest MDS has 32.2k client requests
> >>> while the last one has only 331.
> >>>
> >>> This always leads our cluster into a very bad situation, e.g. many
> >>> MDSs reporting slow requests:
> >>>     ...
> >>>      7 MDSs report slow requests
> >>>      1 MDSs behind on trimming
> >>>     …
> >>>
> >>>
> >>> So our question is: how do we get these MDSs to balance the load?
> >>> Could anyone please shed some light here?
> >>> Thanks a ton!
> >>>
> >>>
> >>> Thanks,
> >>> xz
> >>>
> >>> _______________________________________________
> >>> ceph-users mailing list -- ceph-users@xxxxxxx
> >>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >>
> >>
> >> _______________________________________________
> >> ceph-users mailing list -- ceph-users@xxxxxxx
> >> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx



-- 
Milind
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



