Hi cephers,

Every so often we have a ceph-fuse process that grows to a rather large size (sometimes eating up the whole memory of the machine). Here is an example from a ceph-fuse instance with an RSS of about 200 GB:

# ceph daemon /var/run/ceph/ceph-client.admin.asok dump_mempools
{
    "bloom_filter": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_alloc": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_cache_data": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_cache_onode": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_cache_other": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_fsck": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_txc": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_writing_deferred": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_writing": {
        "items": 0,
        "bytes": 0
    },
    "bluefs": {
        "items": 0,
        "bytes": 0
    },
    "buffer_anon": {
        "items": 51534897,
        "bytes": 207321872398
    },
    "buffer_meta": {
        "items": 64,
        "bytes": 5632
    },
    "osd": {
        "items": 0,
        "bytes": 0
    },
    "osd_mapbl": {
        "items": 0,
        "bytes": 0
    },
    "osd_pglog": {
        "items": 0,
        "bytes": 0
    },
    "osdmap": {
        "items": 28593,
        "bytes": 431872
    },
    "osdmap_mapping": {
        "items": 0,
        "bytes": 0
    },
    "pgmap": {
        "items": 0,
        "bytes": 0
    },
    "mds_co": {
        "items": 0,
        "bytes": 0
    },
    "unittest_1": {
        "items": 0,
        "bytes": 0
    },
    "unittest_2": {
        "items": 0,
        "bytes": 0
    },
    "total": {
        "items": 51563554,
        "bytes": 207322309902
    }
}

The general cache size looks like this (if it would be helpful I can put a whole cache dump somewhere):

# ceph daemon /var/run/ceph/ceph-client.admin.asok dump_cache | grep path | wc -l
84085
# ceph daemon /var/run/ceph/ceph-client.admin.asok dump_cache | grep name | wc -l
168186

Any ideas what 'buffer_anon' is and what could be eating up the 200 GB of RAM?

We are running with a few ceph-fuse specific parameters increased in ceph.conf, among them the client metadata cache size:

# Description: Set the number of inodes that the client keeps in the metadata cache.

We are running a 12.2.7 Ceph cluster, and the cluster is otherwise healthy. Any hints would be appreciated. A couple of follow-up numbers and checks are in the P.S. below.

Thanks,

Andras
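
P.S. Doing the arithmetic on the dump above, buffer_anon works out to just over 4000 bytes per item (207321872398 / 51534897 ≈ 4023), i.e. roughly 4 KB per buffer. A quick one-liner for that, assuming jq is installed on the client (the pool names sit at the top level of the dump_mempools output, as shown above):

# ceph daemon /var/run/ceph/ceph-client.admin.asok dump_mempools | \
      jq '.buffer_anon.bytes / .buffer_anon.items'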
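
I have not gone through the perf counters yet; as a first pass I was going to list which counter groups this ceph-fuse instance exposes ('perf dump' is available on the same admin socket, jq assumed again):

# ceph daemon /var/run/ceph/ceph-client.admin.asok perf dump | jq 'keys'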
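
And to double-check which client cache related settings are actually in effect at runtime (I am guessing client_cache_size and the client_oc_* options are the relevant ones here, but there may be others):

# ceph daemon /var/run/ceph/ceph-client.admin.asok config show | \
      grep -E 'client_cache_size|client_oc'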