Re: Cephfs mds cache tuning

You might find something by looking at the MDS server with perf:

   perf top --pid $(pidof ceph-mds)

as the simplest command to get started. If you can catch it during a
period of blocked requests/not doing anything, you might be able to
see what it is actually doing and figure out something from there.

But that might also yield nothing if it's not blocked on anything CPU-intensive.
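
In that case, the MDS admin socket is another place to look while a
stall is happening. A rough sketch, assuming the default admin socket
setup and that you run it on the active MDS host:

   # requests currently stuck in the MDS and what state they are in
   ceph daemon mds.$(hostname -s) dump_ops_in_flight

   # recently completed requests, with per-event timestamps
   ceph daemon mds.$(hostname -s) dump_historic_ops

If requests sit in the same state for several seconds, that state is
usually a good hint at what the MDS is waiting for.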

Paul

On Mon, 1 Oct 2018 at 06:02, Adam Tygart <mozes@xxxxxxx> wrote:
>
> Hello all,
>
> I've got a ceph (12.2.8) cluster with 27 servers, 500 osds, and 1000
> cephfs mounts (kernel client). We're currently only using 1 active
> mds.
>
> Performance is great about 80% of the time. MDS responses (per ceph
> daemonperf mds.$(hostname -s)) indicate 2k-9k requests per second,
> with a latency under 100.
>
> It is the other 20ish percent I'm worried about. I'll check on it and
> it will go 5-15 seconds with "0" requests and "0" latency, then
> give me 2 seconds of reasonable response times, and then go back to
> nothing. Clients are actually seeing blocked requests for this period
> of time.
>
> The strange bit is that when I *reduce* the mds_cache_size, requests
> and latencies go back to normal for a while. When it happens again,
> I'll increase it back to where it was. It feels like the mds server
> decides that some of these inodes can't be dropped from the cache
> unless the cache size changes. Maybe something wrong with the LRU?
>
> I feel like I've got a reasonable cache size for my workload: 30GB on
> the small end, 55GB on the large. There's no real reason for a swing
> this large, except to delay the issue recurring for longer after
> expanding the cache again.
>
> I also feel like there is probably some magic tunable to change how
> inodes get stuck in the LRU, perhaps mds_cache_mid. Does anyone know
> what this tunable actually does? The documentation is a little sparse.
>
> I can grab logs from the mds if needed, just let me know the settings
> you'd like to see.
>
> --
> Adam
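
On the cache size toggling: a rough sketch of checking and changing the
limit at runtime, assuming your 30-55GB figures refer to
mds_cache_memory_limit (the byte-based limit in 12.2) rather than the
older inode-count mds_cache_size:

   # show the currently active cache settings on the MDS
   ceph daemon mds.$(hostname -s) config show | grep mds_cache

   # change the memory limit at runtime (value in bytes, here 30 GiB)
   ceph tell mds.$(hostname -s) injectargs '--mds_cache_memory_limit=32212254720'

Watching the cache counters in ceph daemonperf right after such a change
might show whether the MDS actually starts trimming at that point.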



-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



