Re: very high ram usage by OSDs on Nautilus


 



OK, assuming my math is right, you've got ~14G of data in the mempools:


~6.5GB bluestore data

~1.8GB bluestore onode

~5GB bluestore other


The rest is other misc stuff. That seems to be pretty in line with the numbers you posted in your screenshot, i.e. this doesn't appear to be a leak; rather, the bluestore caches are all using significantly more data than is typical given the default 4GB osd_memory_target. You can check what an OSD's memory target is set to via the config show command (I'm using the admin socket here, but you don't have to):

ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep '"osd_memory_target"'
    "osd_memory_target": "4294967296",
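For reference, the per-pool figures above can be reproduced by summing the `bytes` fields in the `dump_mempools` JSON; a minimal sketch (using a trimmed copy of the three largest pools from the dump below, in the same shape the command emits):

```python
import json

# Trimmed example in the shape of `ceph daemon osd.N dump_mempools`
# output; only the three largest pools from the dump below are included.
dump = json.loads("""
{
  "mempool": {
    "by_pool": {
      "bluestore_cache_data":  {"items": 28759,     "bytes": 6972870656},
      "bluestore_cache_onode": {"items": 2885255,   "bytes": 1892727280},
      "bluestore_cache_other": {"items": 202831651, "bytes": 5403585971}
    }
  }
}
""")

by_pool = dump["mempool"]["by_pool"]
total = sum(p["bytes"] for p in by_pool.values())

# Print each pool in GiB, largest first, then the total.
for name, p in sorted(by_pool.items(), key=lambda kv: -kv[1]["bytes"]):
    print(f"{name:24s} {p['bytes'] / 2**30:6.2f} GiB")
print(f"{'total':24s} {total / 2**30:6.2f} GiB")
```

Run against the full dump, the sum comes out around 13.6 GiB (the "~14G" above); the three cache pools alone account for roughly 6.5, 1.8, and 5 GiB respectively, matching the breakdown.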

Mark


On 10/29/19 8:07 AM, Philippe D'Anjou wrote:
OK, looking at the mempool dump, what does it tell me? This affects multiple OSDs; I'm getting crashes almost every hour.

{
    "mempool": {
        "by_pool": {
            "bloom_filter": {
                "items": 0,
                "bytes": 0
            },
            "bluestore_alloc": {
                "items": 2545349,
                "bytes": 20362792
            },
            "bluestore_cache_data": {
                "items": 28759,
                "bytes": 6972870656
            },
            "bluestore_cache_onode": {
                "items": 2885255,
                "bytes": 1892727280
            },
            "bluestore_cache_other": {
                "items": 202831651,
                "bytes": 5403585971
            },
            "bluestore_fsck": {
                "items": 0,
                "bytes": 0
            },
            "bluestore_txc": {
                "items": 21,
                "bytes": 15792
            },
            "bluestore_writing_deferred": {
                "items": 77,
                "bytes": 7803168
            },
            "bluestore_writing": {
                "items": 4,
                "bytes": 5319827
            },
            "bluefs": {
                "items": 5242,
                "bytes": 175096
            },
            "buffer_anon": {
                "items": 726644,
                "bytes": 193214370
            },
            "buffer_meta": {
                "items": 754360,
                "bytes": 66383680
            },
            "osd": {
                "items": 29,
                "bytes": 377464
            },
            "osd_mapbl": {
                "items": 50,
                "bytes": 3492082
            },
            "osd_pglog": {
                "items": 99011,
                "bytes": 46170592
            },
            "osdmap": {
                "items": 48130,
                "bytes": 1151208
            },
            "osdmap_mapping": {
                "items": 0,
                "bytes": 0
            },
            "pgmap": {
                "items": 0,
                "bytes": 0
            },
            "mds_co": {
                "items": 0,
                "bytes": 0
            },
            "unittest_1": {
                "items": 0,
                "bytes": 0
            },
            "unittest_2": {
                "items": 0,
                "bytes": 0
            }
        },
        "total": {
            "items": 209924582,
            "bytes": 14613649978
        }
    }
}



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




