Re: Best value for "mds_cache_memory_limit" for large (more than 10 Po) cephfs

I’d say one thing to keep in mind is that the higher you set your cache, and
the more of it that is currently consumed, the LONGER it will take in the
event the standby has to take over…

While standby-replay does help to improve takeover times, it’s not
significant if there are a lot of clients with a lot of open caps.

We are using a 40GB cache after ramping it up a bit at a time to help with
recalls.  But when I fail over now I’m looking at 1-3 minutes, with or
without standby-replay enabled.

Do some testing with failovers, if you have the ability, to ensure that your
timings are OK; a cache that is too big can cause issues in that area, that
I know of…
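
For reference, a rough sketch of how I check usage, bump the limit a step at
a time, and then time a takeover (the daemon name is a placeholder and the
40 GiB value is just what we ended up at):

    # check what the active MDS is actually using right now
    ceph daemon mds.<name> cache status

    # raise (or lower) the limit in steps; the value is in bytes (40 GiB here)
    ceph config set mds mds_cache_memory_limit 42949672960

    # force rank 0 over to the standby and watch how long clients stall
    ceph mds fail 0

    # watch the new active daemon move through replay/reconnect/rejoin
    ceph fs status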

Robert

On Wed, Jun 29, 2022 at 6:54 AM Eugen Block <eblock@xxxxxx> wrote:

> Hi,
>
> you can check how much your MDS is currently using:
>
> ceph daemon mds.<MDS> cache status
>
> Is it already approaching your limit? I usually start with lower values
> if it's difficult to determine how much it will actually use, and
> increase it if necessary.
>
> Zitat von Arnaud M <arnaud.meauzoone@xxxxxxxxx>:
>
> > Hello to everyone
> >
> > I have a ceph cluster currently serving cephfs.
> >
> > The size of the ceph filesystem is around 1 PB.
> > 1 active MDS and 1 standby-replay.
> > I do not have a lot of cephfs clients for now (5), but it may increase
> > to 20 or 30.
> >
> > Here is some output
> >
> > Rank | State          | Daemon                | Activity     | Dentries | Inodes  | Dirs    | Caps
> > 0    | active         | ceph-g-ssd-4-2.mxwjvd | Reqs: 130 /s | 10.2 M   | 10.1 M  | 356.8 k | 707.6 k
> > 0-s  | standby-replay | ceph-g-ssd-4-1.ixqewp | Evts: 0 /s   | 156.5 k  | 127.7 k | 47.4 k  | 0
> >
> > It is working really well
> >
> > I plan to increase this cephfs cluster up to 10 PB (for now) and even
> > more.
> >
> > What would be a good value for "mds_cache_memory_limit"? I have set it
> > to 80 GB because I have enough RAM on my server to do so.
> >
> > Was it a good idea? Or is it counter-productive?
> >
> > All the best
> >
> > Arnaud
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



