Does anyone else still experience memory issues with 12.2.2 and Bluestore?

Hi,
I know that 12.2.2 was supposed to fix the memory leak issues with BlueStore, but we are still experiencing some odd behavior.

Our OSDs flap once in a while; sometimes the flapping doesn't stop until we restart all OSDs on the same server, or even on all nodes.
In our syslog we see messages like "failed: Cannot allocate memory" from all kinds of processes.
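
A quick way to confirm the node is genuinely out of memory at that point (standard Linux tools, nothing Ceph-specific):

# overall memory and swap state on the OSD node
free -h
# OOM-killer activity / failed allocations in the kernel log
dmesg -T | grep -i -E 'oom|allocat'
# resident set size (KB) of each ceph-osd process
ps -C ceph-osd -o pid,rss,cmd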

In addition, we sometimes get this error when running ceph commands:
Traceback (most recent call last):
  File "/usr/bin/ceph", line 125, in <module>
    import rados
ImportError: libceph-common.so.0: cannot map zero-fill pages

This looks like a memory leak; when we restart all OSDs the behavior stops for a few hours or days.
We have 8 OSD servers, each with 16 SSD disks and 64GB of RAM. The BlueStore cache is set to the default (3GB for SSD).
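
If my math is right, the defaults alone leave almost no headroom: 16 OSDs x 3GB cache = 48GB on a 64GB node, before allocator overhead, pg logs, and the OS itself. As a workaround we are considering capping the cache; a minimal ceph.conf sketch, assuming bluestore_cache_size_ssd is still the right option in 12.2.2:

[osd]
# cap each OSD's BlueStore cache at 1GB instead of the 3GB SSD default
# (value is in bytes; takes effect on OSD restart)
bluestore_cache_size_ssd = 1073741824

Does lowering this actually help in practice, or does usage grow past the cache limit anyway?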

As a result, our cluster is almost constantly recovering/backfilling, which impacts performance.

root@ecprdbcph10-opens:~# ceph daemon osd.1 dump_mempools
{
    "bloom_filter": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_alloc": {
        "items": 5105472,
        "bytes": 5105472
    },
    "bluestore_cache_data": {
        "items": 68868,
        "bytes": 1934663680
    },
    "bluestore_cache_onode": {
        "items": 152640,
        "bytes": 102574080
    },
    "bluestore_cache_other": {
        "items": 16920009,
        "bytes": 371200513
    },
    "bluestore_fsck": {
        "items": 0,
        "bytes": 0
    },
    "bluestore_txc": {
        "items": 3,
        "bytes": 2160
    },
    "bluestore_writing_deferred": {
        "items": 33,
        "bytes": 265015
    },
    "bluestore_writing": {
        "items": 19,
        "bytes": 6403820
    },
    "bluefs": {
        "items": 303,
        "bytes": 12760
    },
    "buffer_anon": {
        "items": 32958,
        "bytes": 14087657
    },
    "buffer_meta": {
        "items": 68996,
        "bytes": 6071648
    },
    "osd": {
        "items": 187,
        "bytes": 2255968
    },
    "osd_mapbl": {
        "items": 0,
        "bytes": 0
    },
    "osd_pglog": {
        "items": 514238,
        "bytes": 152438172
    },
    "osdmap": {
        "items": 35699,
        "bytes": 823040
    },
    "osdmap_mapping": {
        "items": 0,
        "bytes": 0
    },
    "pgmap": {
        "items": 0,
        "bytes": 0
    },
    "mds_co": {
        "items": 0,
        "bytes": 0
    },
    "unittest_1": {
        "items": 0,
        "bytes": 0
    },
    "unittest_2": {
        "items": 0,
        "bytes": 0
    },
    "total": {
        "items": 22899425,
        "bytes": 2595903985
    }
}
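
If I read this right, the mempools only account for ~2.6GB for this OSD, roughly what the cache setting would predict, so whatever is eating the rest sits outside the mempool accounting. We will also try comparing this against tcmalloc's heap statistics, in case tcmalloc is holding freed pages without returning them to the OS:

# tcmalloc's view of osd.1's heap: bytes in use vs. held in freelists
ceph tell osd.1 heap stats
# ask tcmalloc to return unused pages to the OS
ceph tell osd.1 heap release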


Any help would be appreciated.
Thank you


--

Tzachi Strul

Storage DevOps // Kenshoo

Office +972 73 2862-368 // Mobile +972 54 755 1308

