Hi Jeff,

Output of ganesha_stats inode:

Timestamp: Wed Apr 15 09:37:32 2020 522216985 nsecs
Inode Cache Requests: 0
Inode Cache Hits: 361976
Inode Cache Misses: 0
Inode Cache Conflicts: 2
Inode Cache Adds: 307597
Inode Cache Mapping: 615198

Best regards
Felix

On 14.04.20, 21:26, "Jeff Layton" <jlayton@xxxxxxxxxx> wrote:

    On Tue, 2020-04-14 at 06:27 +0000, Stolte, Felix wrote:
    > Hi Jeff,
    >
    > Thank you for the hint. I set Entries_HWMark = 100 in the MDCACHE section
    > of ganesha.conf and upgraded ganesha to 3.2 this weekend. Cache pressure
    > warnings still keep occurring, but not as frequently as before.
    > Is there another suggestion I missed?
    >

    Great. If it doesn't help, then it would be good to see the
    "ganesha_stats inode" output. That script is part of the
    nfs-ganesha-utils package. The output from my (idle) server looks like
    this:

    ----------------8<-------------------
    $ ./ganesha_stats.py inode
    Timestamp: Tue Apr 14 15:24:18 2020 795806304 nsecs
    Inode Cache statistics
    Cache Requests: 0
    Cache Hits: 2412
    Cache Misses: 0
    Cache Conflicts: 0
    Cache Adds: 193
    Cache Mapping: 386

    LRU Utilization Data
    FSAL opened FD count : 0
    System limit on FDs : 524288
    FD usage : Below Low Water Mark
    LRU entries in use : 12
    Chunks in use : 0
    ----------------8<-------------------

    That should tell us something about the cached inodes that ganesha is
    holding onto.
    --
    Jeff Layton <jlayton@xxxxxxxxxx>
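
For readers following the thread, here is a minimal sketch of the ganesha.conf change Felix describes above, assuming the standard ganesha.conf block syntax; the only value taken from this thread is Entries_HWMark = 100, and the comments are illustrative:

----------------8<-------------------
# Sketch of an MDCACHE block in ganesha.conf (assumed layout, value from this thread)
MDCACHE {
    # High-water mark for cached inode entries; lower values make ganesha
    # trim its metadata cache sooner.
    Entries_HWMark = 100;
}
----------------8<-------------------

The idea behind lowering Entries_HWMark is that a smaller ganesha inode cache holds fewer Ceph capabilities, which should make it easier for the MDS to reclaim them and reduce the cache-pressure warnings.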