Re: MDS: cache pressure warnings with Ganesha exports

Hi Jeff,

My Ganesha instances are running on Ubuntu 18.04 with packages from http://ppa.launchpad.net/nfs-ganesha/nfs-ganesha-3.0/ubuntu
Unfortunately, that PPA does not provide a debug package, so I cannot poke around due to the missing debug symbols. Do you have another approach to get the information needed?

I cloned https://github.com/nfs-ganesha/nfs-ganesha and tried to use the ganesha_stats.py script from there, but with the inode parameter I get:

' Error: Can't talk to ganesha service on d-bus. Looks like Ganesha is down'.

All other parameters work (list_clients, etc.)
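(A sanity check for the D-Bus error, sketched below: "Looks like Ganesha is down" from ganesha_stats usually just means the script cannot reach org.ganesha.nfsd on the system bus. The policy-file path and bus/method names are the typical ones; they are not verified against the Ubuntu PPA packaging.)

```shell
# The D-Bus policy file is normally shipped by the ganesha package;
# without it the stats interface is unreachable even while Ganesha runs.
ls /etc/dbus-1/system.d/org.ganesha.nfsd.conf

# Probe the service directly on the system bus. If this fails too,
# the problem is the D-Bus wiring rather than Ganesha itself.
dbus-send --system --print-reply --dest=org.ganesha.nfsd \
    /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.ShowExports
```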

The cache pressure issue seems to be related to the tape backup we run over a kernel mount. During backup, the MDS cache grows by roughly 50 million items per hour. While the kernel mount seems to handle cache pressure in time, Ganesha does not: the cache pressure warnings are always for sessions from Ganesha.
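(The growth and the per-client caps can be watched via the MDS admin socket; `mds.<id>` below is a placeholder for the daemon name:)

```shell
# Inode/dentry counts currently in the MDS cache (run on the MDS host).
ceph daemon mds.<id> cache status

# Per-session view including num_caps held by each client; the
# Ganesha sessions should stand out during the backup window.
ceph daemon mds.<id> session ls
```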

To work around the issue, I was wondering whether it's possible to mount the Ceph filesystem without object caching on the MDS side. The backup does not really benefit from caching, because every file (or its metadata) is only read once.
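(If switching the backup mount to ceph-fuse were an option, caching could at least be dialed down per client; a hypothetical ceph.conf fragment, values illustrative and untested:)

```ini
# Hypothetical fragment for a dedicated backup client (ceph-fuse).
[client.backup]
client_oc = false          # disable the client object cacher
client_cache_size = 4096   # bound the client inode cache (default 16384)
```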

Regards
Felix

PS: Sorry for the previous mail; I forgot to disable signing.
-------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt
-------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------
 

On 15.04.20, 14:57, "Jeff Layton" <jlayton@xxxxxxxxxx> wrote:

    On Wed, 2020-04-15 at 12:06 +0000, Stolte, Felix wrote:
    > Hi Jeff,
    > 
    > Output of ganesha_stats inode:
    > 
    > Timestamp: Wed Apr 15 09:37:32 2020, 522216985 nsecs
    > Inode Cache Requests: 0
    > Inode Cache Hits: 361976
    > Inode Cache Misses: 0
    > Inode Cache Conflicts: 2
    > Inode Cache Adds: 307597
    > Inode Cache Mapping: 615198
    > 
    
    > The LRU Utilization part is missing.
    
    Ahh ok...the interesting bit (the LRU Utilization part) was only added
    very recently (Feb 2020), so that won't be as helpful as I had hoped on
    older ganesha versions.
    
    One thing you could do is hook up a debugger to get the interesting
    part. Install the nfs-ganesha debuginfo package and:
    
    $ gdb -p $(pidof ganesha.nfsd)
    (gdb) p lru_state
    (gdb) detach
    
    ...and collect what it prints. That'll stall the server for a bit while
    you're poking around in there, though.
    
    The entries_used field is what we're most interested in. Each entry
    holds a reference to a libcephfs Inode object, and that consumes
    resources on the MDS.
    
    We want to keep that value pretty low as a matter of course. With
    Entries_HWMark = 100, it should be below 100 most of the time. It can
    and does go above that though, so it'd be interesting to see what that
    looks like, particularly when the MDS is complaining about client
    resource utilization.
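
    (For reference, that knob lives in the MDCACHE block of ganesha.conf in
    Ganesha 3.x; a minimal sketch, value illustrative:)

    ```
    MDCACHE {
        # High-water mark for cached entries; the LRU background thread
        # tries to keep entries_used near this value.
        Entries_HWMark = 100;
    }
    ```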
    
    -- 
    Jeff Layton <jlayton@xxxxxxxxxx>
    
    

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



