Re: Unexplainable high memory usage OSD with BlueStore


 



hi,

I would like to add a memory problem as well.

What we have:

* Ceph version 12.2.11
* 5 x 512GB Samsung 850 Evo
* 5 x 1TB WD Red (5.4k)
* OS Debian Stretch ( Proxmox VE 5.x )
* 2 x Intel Xeon E5-2620 v4
* Memory 64GB DDR4

I've added this to ceph.conf:

...

[osd]
  osd memory target = 3221225472
...

Which is active:


===================
# ceph daemon osd.31 config show | grep memory_target
    "osd_memory_target": "3221225472",
===================
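
For what it's worth, on Luminous the target can (as far as I understand) also be pushed into the running OSDs via the admin socket or ceph tell, so no restart should be needed; a sketch:

=============
# set the target on a single OSD through its admin socket
ceph daemon osd.31 config set osd_memory_target 3221225472

# or inject it into all OSDs at once
ceph tell osd.* injectargs '--osd_memory_target 3221225472'
=============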

The problem is that the OSD processes are eating my memory:

==============
# free -h
              total        used        free      shared  buff/cache   available
Mem:            62G         52G        7.8G        693M        2.2G         50G
Swap:          8.0G        5.8M        8.0G
==============

As an example, osd.31, which is an HDD (WD Red):


==============
# ceph daemon osd.31 dump_mempools

...

    "bluestore_alloc": {
        "items": 40379056,
        "bytes": 40379056
    },
    "bluestore_cache_data": {
        "items": 1613,
        "bytes": 130048000
    },
    "bluestore_cache_onode": {
        "items": 64888,
        "bytes": 43604736
    },
    "bluestore_cache_other": {
        "items": 7043426,
        "bytes": 209450352
    },
...
    "total": {
        "items": 48360478,
        "bytes": 633918931
    }
=============
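
The mempool total is only about 634 MB, so most of the resident memory seems to sit outside the mempools. If the OSDs are built against tcmalloc (the default, as far as I know), the heap can be inspected and freed pages handed back to the kernel over the admin socket; a sketch:

=============
# show tcmalloc heap statistics (in-use bytes vs. freelist / unmapped pages)
ceph daemon osd.31 heap stats

# ask tcmalloc to return unused pages to the OS
ceph daemon osd.31 heap release
=============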


=============
# ps -eo pmem,pcpu,vsize,pid,cmd | sort -k 1 -nr | head -30
 6.5  1.8 5040944  6594 /usr/bin/ceph-osd -f --cluster ceph --id 31 --setuser ceph --setgroup ceph
 6.4  2.4 5053492  6819 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser ceph --setgroup ceph
 6.4  2.3 5044144  5454 /usr/bin/ceph-osd -f --cluster ceph --id 4 --setuser ceph --setgroup ceph
 6.2  1.9 4927248  6082 /usr/bin/ceph-osd -f --cluster ceph --id 5 --setuser ceph --setgroup ceph
 6.1  2.2 4839988  7684 /usr/bin/ceph-osd -f --cluster ceph --id 3 --setuser ceph --setgroup ceph
 6.1  2.1 4876572  8155 /usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
 5.9  1.3 4652608  5760 /usr/bin/ceph-osd -f --cluster ceph --id 32 --setuser ceph --setgroup ceph
 5.8  1.9 4699092  8374 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
 5.8  1.4 4562480  5623 /usr/bin/ceph-osd -f --cluster ceph --id 30 --setuser ceph --setgroup ceph
 5.7  1.3 4491624  7268 /usr/bin/ceph-osd -f --cluster ceph --id 34 --setuser ceph --setgroup ceph
 5.5  1.2 4430164  6201 /usr/bin/ceph-osd -f --cluster ceph --id 33 --setuser ceph --setgroup ceph
 5.4  1.4 4319480  6405 /usr/bin/ceph-osd -f --cluster ceph --id 29 --setuser ceph --setgroup ceph
 1.0  0.8 1094500  4749 /usr/bin/ceph-mon -f --cluster ceph --id fc-r02-ceph-osd-01 --setuser ceph --setgroup ceph
 0.2  4.8  948764  4803 /usr/bin/ceph-mgr -f --cluster ceph --id fc-r02-ceph-osd-01 --setuser ceph --setgroup ceph
=================
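
The twelve ceph-osd processes above at a 3 GiB target each would already account for roughly 36 GB, and as far as I know the target is something the cache autotuner aims for rather than a hard limit. To compare the actual summed resident memory of the OSDs against that, something like this should do (sketch):

=============
# sum the RSS (KiB) of all ceph-osd processes and print it in GiB
ps -C ceph-osd -o rss= | awk '{sum += $1} END {printf "%.1f GiB\n", sum / 1024 / 1024}'
=============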

After a reboot the node uses roughly 30GB, but within a month it is again over 50GB and still growing.
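
A simple way to see which daemons are actually growing would be to log a per-process RSS snapshot periodically, roughly like this (sketch; the log path is only an example):

=============
# append a timestamped RSS snapshot of all ceph-osd processes once per hour
while true; do
    { date; ps -C ceph-osd -o pid=,rss=,args=; } >> /root/ceph-osd-rss.log
    sleep 3600
done
=============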

Any suggestions?

cu denny
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


