memory usage ceph jewel OSDs

Hello,

In the last few days I have been trying to figure out why my OSDs
need such a huge amount of RAM (1.2 - 4 GB each). With this, my
system memory is at its limit. At first I thought it was because of
the large amount of backfilling (some disks had died), but everything
has been fine for a few days now and the memory stays at that level.
Restarting the OSDs did not change this behaviour.

I am running Ceph Jewel (10.2.6) on RedHat 7. The cluster has 8 hosts
with 36 4 TB OSDs each and 4 hosts with 15 4 TB OSDs each.

I tried to profile the memory usage as documented here:
http://docs.ceph.com/docs/jewel/rados/troubleshooting/memory-profiling/
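
For reference, the workflow from that page was roughly the following
(osd.98 is just one of my OSDs as an example; the profiler has to be
started before a dump can be written):

# ceph tell osd.98 heap start_profiler
# ceph tell osd.98 heap dump
# ceph tell osd.98 heap stats
# ceph tell osd.98 heap stop_profiler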

But the output of these commands didn't help me; I am still confused
about the memory usage.

From "ceph tell osd.98 heap dump" I get the following output:
# ceph tell osd.98 heap dump
osd.98 dumping heap profile now.
------------------------------------------------
MALLOC:     1290458456 ( 1230.7 MiB) Bytes in use by application
MALLOC: +            0 (    0.0 MiB) Bytes in page heap freelist
MALLOC: +     63583000 (   60.6 MiB) Bytes in central cache freelist
MALLOC: +      5896704 (    5.6 MiB) Bytes in transfer cache freelist
MALLOC: +    102784400 (   98.0 MiB) Bytes in thread cache freelists
MALLOC: +     11350176 (   10.8 MiB) Bytes in malloc metadata
MALLOC:   ------------
MALLOC: =   1474072736 ( 1405.8 MiB) Actual memory used (physical + swap)
MALLOC: +    129064960 (  123.1 MiB) Bytes released to OS (aka unmapped)
MALLOC:   ------------
MALLOC: =   1603137696 ( 1528.9 MiB) Virtual address space used
MALLOC:
MALLOC:          88305              Spans in use
MALLOC:           1627              Thread heaps in use
MALLOC:           8192              Tcmalloc page size
------------------------------------------------
Call ReleaseFreeMemory() to release freelist memory to the OS (via
madvise()). Bytes released to the OS take up virtual address space but
no physical memory.
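
If I add up the tcmalloc lines, they are at least consistent:
1230.7 + 60.6 + 5.6 + 98.0 + 10.8 MiB is roughly the 1405.8 MiB of
actual memory used, and together with the 123.1 MiB already released
to the OS that gives the 1528.9 MiB of virtual address space. If I
read the docs right, the ReleaseFreeMemory() call mentioned above can
be triggered with:

# ceph tell osd.98 heap release

but that would only return the freelist memory to the OS, not the
1230.7 MiB the application itself holds.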


I would say the application needs 1230.7 MiB of RAM. But if I analyse
the corresponding dump with pprof, only a few megabytes are shown.
Following are the first few lines of the pprof output:

# pprof --text /usr/bin/ceph-osd osd.98.profile.0002.heap 
Using local file /usr/bin/ceph-osd.
Using local file osd.98.profile.0002.heap.
Total: 8.9 MB
     3.3  36.7%  36.7%      3.3  36.7% ceph::log::Log::create_entry
     2.3  25.5%  62.2%      2.3  25.5% ceph::buffer::list::append@a1f280
     1.1  12.1%  74.3%      2.0  23.1% SimpleMessenger::add_accept_pipe
     0.9  10.4%  84.7%      0.9  10.5% Pipe::Pipe
     0.2   2.8%  87.5%      0.2   2.8% std::map::operator[]
     0.2   2.2%  89.7%      0.2   2.2% std::vector::_M_default_append
     0.2   1.8%  91.5%      0.2   1.8% std::_Rb_tree::_M_copy
     0.1   0.8%  92.4%      0.1   0.8% ceph::buffer::create_aligned
     0.1   0.8%  93.2%      0.1   0.8% std::string::_Rep::_S_create
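
If it helps, the profiling documentation also describes comparing two
dumps to see what has grown in between, which I assume would look
something like this:

# pprof --text --base=osd.98.profile.0001.heap /usr/bin/ceph-osd osd.98.profile.0002.heap

but the absolute numbers above already look far too small to me.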


Is this normal? Am I doing something wrong? Is there a bug? Why do my
OSDs need so much RAM?

Thanks for your help

Regards,
Manuel

-- 
Manuel Lausch

Systemadministrator
Cloud Services

1&1 Mail & Media Development & Technology GmbH | Brauerstraße 48 |
76135 Karlsruhe | Germany
Phone: +49 721 91374-1847
E-Mail: manuel.lausch@xxxxxxxx | Web: www.1und1.de

Amtsgericht Montabaur, HRB 5452

Geschäftsführer: Frank Einhellinger, Thomas Ludwig, Jan Oetjen


Member of United Internet

This e-mail may contain confidential and/or privileged information. If
you are not the intended recipient of this e-mail, you are hereby
notified that saving, distribution or use of the content of this e-mail
in any way is prohibited. If you have received this e-mail in error,
please notify the sender and delete the e-mail.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



