Re: MDS: cache pressure warnings with Ganesha exports

Hi Jeff,

Thank you for the hint. I set Entries_HWMark = 100 in the MDCACHE section of ganesha.conf and upgraded Ganesha to 3.2 this weekend. Cache pressure warnings still keep occurring, but not as frequently as before. Is there another suggestion I missed?
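
For reference, the MDCACHE section now looks roughly like this (the other values are unchanged from my first mail, quoted below):

MDCACHE {
    Dir_Chunk = 0;
    NParts = 1;
    Cache_Size = 1;
    Entries_HWMark = 100;
}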

Regards
Felix

-------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Dr. Karl Eugen Huthmacher
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt
-------------------------------------------------------------------------------------
 

On 09.04.20 at 14:10, "Jeff Layton" <jlayton@xxxxxxxxxx> wrote:

    On Tue, 2020-04-07 at 07:34 +0000, Stolte, Felix wrote:
    > Hey folks,
    > 
    > I keep getting ceph health warnings about clients failing to respond to cache pressure. They always refer to sessions from ganesha exports. I've read all threads regarding this issue, but none of my changes resolved it. What I’ve done so far:
    > 
    > Ganesha.conf:
    > MDCACHE {
    >     Dir_Chunk = 0;
    >     NParts = 1;
    >     Cache_Size = 1;
    > }
    > 
    > Attr_Expiration_Time = 0 in every export
    > 
    > mds_cache_memory_limit = 17179869184 on the MDS servers
    > 
    > I even set “client_oc = false” on the ganesha server, but this doesn’t seem to be applied.
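    > 
    > For clarity, roughly where each of these lives (section names are my best understanding, corrections welcome):
    > 
    >     # ganesha.conf, inside each EXPORT block
    >     EXPORT {
    >         ...
    >         Attr_Expiration_Time = 0;
    >     }
    > 
    >     # ceph.conf on the ganesha server (read by the CEPH FSAL / libcephfs)
    >     [client]
    >         client_oc = false
    > 
    >     # ceph.conf on the MDS servers, or centrally via
    >     # "ceph config set mds mds_cache_memory_limit 17179869184"
    >     [mds]
    >         mds_cache_memory_limit = 17179869184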
    > 
    > My setup is ceph version 14.2.8 on all servers and clients, with one active MDS. Ganesha 2.8.3 runs on a dedicated server.
    > 
    > ceph daemon mds.<active_mds> dump_mempools (filtered out the empty pools):
    > 
    > "mempool": {
    >         "by_pool": {
    >             "bloom_filter": {
    >                 "items": 1414723,
    >                 "bytes": 1414723
    >             },
    >             "buffer_anon": {
    >                 "items": 180992,
    >                 "bytes": 2404306271
    >             },
    >             "buffer_meta": {
    >                 "items": 178660,
    >                 "bytes": 15722080
    >             },
    >             "osdmap": {
    >                 "items": 4121,
    >                 "bytes": 75912
    >             },   
    >       "mds_co": {
    >                 "items": 221728924,
    >                 "bytes": 16320868177
    >             },
    >         },
    >         "total": {
    >             "items": 223507420,
    >             "bytes": 18742387163 
    > 
    > 
    > Any hint on how to resolve the issue is welcome. If more information is needed, I am glad to provide it.
    > 
    
    You may want to follow the suggestion here and see if it helps:
    
        https://tracker.ceph.com/issues/44976#note-23
    
    -- 
    Jeff Layton <jlayton@xxxxxxxxxx>
    
    

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



