Re: CephFS MDS sizing

On Tue, Sep 6, 2022 at 11:29 AM Vladimir Brik
<vladimir.brik@xxxxxxxxxxxxxxxx> wrote:
>
>  > What problem are you actually
>  > trying to solve with that information?
> I suspect that the mds_cache_memory_limit we set (~60GB) is
> sub-optimal and I am wondering if we would be better off if,
> say, we halved the cache limits and doubled the number of
> MDSes. I am looking for metrics to quantify this, and
> cache_hit_rate and others in "dump loads" seem relevant.

There are other indirect ways to measure cache effectiveness. Using
the MDS `perf dump` command, you can look at the objecter.omap_rd
counter to see how often the MDS goes out to directory objects to
read dentries. You can also look at mds_mem.ino+ and mds_mem.ino- to
see how often inodes go in and out of the cache.
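
As a rough sketch (assuming a single active MDS; replace the
placeholder name mds.a with one of your daemons), you can pull just
those counters and watch how the deltas grow between samples:

  # Full counter dump for one MDS:
  ceph tell mds.a perf dump

  # Only the cache-related counters, via jq:
  ceph tell mds.a perf dump | jq '{omap_rd: .objecter.omap_rd,
                                   ino_in:  .mds_mem["ino+"],
                                   ino_out: .mds_mem["ino-"]}'

  # Sample once a minute; steadily growing omap_rd and ino+/ino-
  # deltas under a steady workload suggest the cache is churning.
  while true; do
    ceph tell mds.a perf dump | \
      jq -c '[.objecter.omap_rd, .mds_mem["ino+"], .mds_mem["ino-"]]'
    sleep 60
  done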


--
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat, Inc.
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


