MDS_CACHE_OVERSIZED, what is this a symptom of?

So I'm getting this warning (although there are no noticeable problems in the cluster):

$ ceph health detail
HEALTH_WARN 1 MDSs report oversized cache
[WRN] MDS_CACHE_OVERSIZED: 1 MDSs report oversized cache
    mds.storefs-b(mds.0): MDS cache is too large (7GB/4GB); 0 inodes in use by clients, 0 stray files
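For reference, the 4GB figure appears to be the default `mds_cache_memory_limit`. If simply growing the cache turns out to be the right fix, I believe it can be inspected and raised like this (the 8 GiB value below is just an example, not a recommendation):

```shell
# Show the current MDS cache memory limit (default is 4 GiB)
ceph config get mds mds_cache_memory_limit

# Raise the limit to 8 GiB (value is in bytes)
ceph config set mds mds_cache_memory_limit 8589934592
```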

Ceph FS status:

$ ceph fs status
storefs - 20 clients
=======
RANK      STATE          MDS        ACTIVITY     DNS    INOS   DIRS   CAPS  
 0        active      storefs-a  Reqs:    0 /s  1385k  1385k   113k   193k  
0-s   standby-replay  storefs-b  Evts:    0 /s  3123k  3123k  33.5k     0   
      POOL          TYPE     USED  AVAIL  
storefs-metadata  metadata  19.4G  12.6T  
 storefs-pool4x     data    4201M  9708G  
 storefs-pool2x     data    2338G  18.9T  
MDS version: ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)

What is this telling me? Is it just a case of the cache size needing to be bigger? Or is it a problem with the clients holding onto some kind of reference (the documentation says this can be a cause, but not how to check for it)?
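In case it helps, these are the commands I was planning to run to dig further; I assume the cache status and per-session caps counts are the right things to look at, but I'm not sure what values would indicate a misbehaving client:

```shell
# Summary of the affected MDS's cache usage
ceph tell mds.storefs-b cache status

# List client sessions, including the number of caps each client holds
ceph tell mds.storefs-b session ls
```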

Thanks in advance,
Pedro Lopes
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


